id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
EdwardLin2023/ASVP_ESD | EdwardLin2023 | 2023-04-19T02:51:35Z | 30 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-19T02:51:35Z | 2023-04-18T08:46:32.000Z | 2023-04-18T08:46:32 | ---
license: cc-by-4.0
---
# The Audio, Speech, and Vision Processing Lab - Emotional Sound Database (ASVP-ESD)
## ABOUT
The Audio, Speech, and Vision Processing Lab - Emotional Sound Database (ASVP-ESD)
was created by the School of Electronic and Information Engineering, South China University of Technology.
## CHOSEN EMOTIONS
13 emotions were chosen:
1. boredom, sigh
2. neutral, calm
3. happy, laugh, gaggle
4. sad, cry
5. angry, grunt, frustration
6. fearful, scream, panic
7. disgust, dislike, contempt
8. surprised, gasp, amazed
9. excited
10. pleasure
11. pain, groan
12. disappointment, disapproval
13. breath
## ORGANISING THE DATABASE
### Speech Statistic
| Duration Statistic | Duration |
| -------------------------- |:-------------------------------------------------:|
| Num. of Clips | 2,150 |
| Total Duration | 13347.835 seconds = 222.464 minutes = 3.708 hours |
| Max Dur | 32.235 seconds |
| Min Dur | 0.287 seconds |
| Mean Dur | 6.208 seconds |
| Std. Dur | 3.839 seconds |
| Num. of Clips > 30 seconds | 1 |

| Emotion | Num. of Clips |
| ------------------------------- |:-------------:|
| 01: boredom, sigh | 81 |
| 02: neutral, calm | 657 |
| 03: happy, laugh, gaggle | 154 |
| 04: sad, cry | 268 |
| 05: angry, grunt, frustration | 385 |
| 06: fearful, scream, panic | 63 |
| 07: disgust, dislike, contempt | 90 |
| 08: surprised, gasp, amazed | 144 |
| 09: excited | 136 |
| 10: pleasure | 15 |
| 11: pain, groan | 25 |
| 12: disappointment, disapproval | 132 |
| 13: breath | 0 |

| Emotion Intensity | Num. of Clips |
| ----------------- |:-------------:|
| 01: normal | 1,783 |
| 02: high | 367 |

| Gender | Num. of Clips |
| ----------- |:-------------:|
| 01: male | 1,224 |
| 02: female | 926 |

| Age Range | Num. of Clips |
| ---------- |:-------------:|
| 01: >65 | 65 |
| 02: 20~65 | 1,914 |
| 03: 3~20 | 80 |
| 04: <3 | 91 |

| Language | Num. of Clips |
| ------------- |:-------------:|
| 01: Mandarin | 937 |
| 02: English | 621 |
| 03: French | 175 |
| 04: Others | 417 |
### Non-Speech Statistic
| Duration Statistic | Duration |
| -------------------------- |:-------------------------------------------------:|
| Num. of Clips | 5,484 |
| Total Duration | 14438.117 seconds = 240.635 minutes = 4.011 hours |
| Max Dur | 25.810 seconds |
| Min Dur | 0.141 seconds |
| Mean Dur | 2.633 seconds |
| Std. Dur | 2.720 seconds |
| Num. of Clips > 30 seconds | 0 |

| Emotion | Num. of Clips |
| ------------------------------- |:-------------:|
| 01: boredom, sigh | 392 |
| 02: neutral, calm | 253 |
| 03: happy, laugh, gaggle | 878 |
| 04: sad, cry | 383 |
| 05: angry, grunt, frustration | 339 |
| 06: fearful, scream, panic | 799 |
| 07: disgust, dislike, contempt | 473 |
| 08: surprised, gasp, amazed | 808 |
| 09: excited | 109 |
| 10: pleasure | 273 |
| 11: pain, groan | 706 |
| 12: disappointment, disapproval | 70 |
| 13: breath | 1 |

| Emotion Intensity | Num. of Clips |
| ----------------- |:-------------:|
| 01: normal | 4,693 |
| 02: high | 791 |

| Gender | Num. of Clips |
| ----------- |:-------------:|
| 01: male | 2,919 |
| 02: female | 2,565 |

| Age Range | Num. of Clips |
| ---------- |:-------------:|
| 01: >65 | 73 |
| 02: 20~65 | 5,224 |
| 03: 3~20 | 100 |
| 04: <3 | 87 |

| Language | Num. of Clips |
| ------------- |:-------------:|
| 01: Mandarin | 512 |
| 02: English | 3,258 |
| 03: French | 109 |
| 04: Others | 1,605 |
## References
1. Dejoli Tientcheu Touko Landry, Qianhua He, Haikang Yan and Yanxiong Li. (2020). ASVP-ESD: A dataset and its benchmark for emotion recognition using both speech and non-speech utterances. Global Scientific Journals, 8(6), 1793-1798.
| [
-0.6588921546936035,
-0.7200600504875183,
0.20112337172031403,
0.5400562286376953,
-0.35650792717933655,
0.28920072317123413,
-0.0886073112487793,
-0.27831149101257324,
1.1715437173843384,
0.13277381658554077,
-0.9045742154121399,
-0.6872096657752991,
-1.023510217666626,
0.2752956449985504... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/xnli | metaeval | 2023-05-23T12:38:22Z | 30 | 0 | null | [
"region:us"
] | 2023-05-23T12:38:22Z | 2023-04-24T09:51:47.000Z | 2023-04-24T09:51:47 | Human annotated part of xnli
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
``` | [
-0.19055476784706116,
-0.14476898312568665,
0.3482834994792938,
0.08090750128030777,
-0.17428924143314362,
0.03988850116729736,
-0.3854239881038666,
-0.9954451322555542,
0.614825963973999,
0.46748438477516174,
-0.7469439506530762,
-0.5220374464988708,
-0.13141001760959625,
0.49626788496971... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
norabelrose/truthful_qa | norabelrose | 2023-04-24T22:32:59Z | 30 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-24T22:32:59Z | 2023-04-24T22:25:30.000Z | 2023-04-24T22:25:30 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
roszcz/pianofor-ai-sustain | roszcz | 2023-07-22T19:53:35Z | 30 | 0 | null | [
"region:us"
] | 2023-07-22T19:53:35Z | 2023-04-30T14:46:29.000Z | 2023-04-30T14:46:29 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: midi_filename
dtype: string
- name: record_id
dtype: int64
- name: user_id
dtype: int64
- name: user
dtype: string
splits:
- name: train
num_bytes: 1187031441
num_examples: 5756
download_size: 465426973
dataset_size: 1187031441
---
# Dataset Card for "pianofor-ai-sustain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5761563181877136,
-0.2834049463272095,
0.22916075587272644,
0.3341481685638428,
-0.08939936757087708,
0.014816615730524063,
-0.104264035820961,
-0.07283277064561844,
0.7140198349952698,
0.49800339341163635,
-0.9967790246009827,
-0.7774326205253601,
-0.38768479228019714,
-0.1654669195413... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
J-Mourad/MNAD.v2 | J-Mourad | 2023-05-16T12:22:21Z | 30 | 0 | null | [
"region:us"
] | 2023-05-16T12:22:21Z | 2023-05-16T11:53:19.000Z | 2023-05-16T11:53:19 | # About the MNAD Dataset
The MNAD corpus is a collection of over **1 million Moroccan news articles** written in the modern Arabic language. These news articles have been gathered from 11 prominent electronic news sources. The dataset is made available to the academic community for research purposes, such as data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), and other non-commercial activities.
## Dataset Fields
- Title: The title of the article
- Body: The body of the article
- Category: The category of the article
- Source: The Electronic Newspaper source of the article
## About Version 1 of the Dataset (MNAD.v1)
Version 1 of the dataset comprises **418,563** articles classified into 19 categories. The data was collected from well-known electronic news sources, namely Akhbarona.ma, Hespress.ma, Hibapress.com, and Le360.com. The articles were stored in four separate CSV files, each corresponding to the news website source. Each CSV file contains three fields: Title, Body, and Category of the news article.
The dataset is rich in Arabic vocabulary, with approximately 906,125 unique words. It has been utilized as a benchmark in the research paper:
```"A Moroccan News Articles Dataset (MNAD) For Arabic Text Categorization". In 2021 International Conference on Decision Aid Sciences and Application (DASA).```
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv1](https://www.kaggle.com/datasets/jmourad100/mnad-moroccan-news-articles-dataset)
- Huggingface Datasets: [MNADv1](https://huggingface.co/datasets/J-Mourad/MNAD.v1)
## About Version 2 of the Dataset (MNAD.v2)
Version 2 of the MNAD dataset includes an additional **653,901** articles, bringing the total number of articles to over 1 million (**1,069,489**), classified into the same 19 categories as in version 1. The new documents were collected from seven additional prominent Moroccan news websites, namely al3omk.com, medi1news.com, alayam24.com, anfaspress.com, alyaoum24.com, barlamane.com, and SnrtNews.com.
The newly collected articles have been merged with the articles from the previous version into a single CSV file named ```MNADv2.csv```. This file includes an additional column called "Source" to indicate the source of each news article.
Furthermore, MNAD.v2 incorporates improved pre-processing techniques and data-cleaning methods. These enhancements involve removing duplicates, eliminating multiple spaces, discarding rows with NaN values, replacing new lines with "\n", excluding very long and very short articles, and removing non-Arabic articles. These additions and improvements aim to enhance the usability and value of the MNAD dataset for researchers and practitioners in the field of Arabic text analysis.
This dataset is available for download from the following sources:
- Kaggle Datasets : [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
- Huggingface Datasets: [MNADv2](https://huggingface.co/datasets/J-Mourad/MNAD.v2)
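For a quick look at the merged file, here is a minimal pandas sketch; the local path is an assumption (download `MNADv2.csv` from one of the sources above first), and the column names follow the field list above.

```python
import pandas as pd

# Hypothetical local path -- download MNADv2.csv from one of the sources above.
df = pd.read_csv("MNADv2.csv")

# The four documented fields: Title, Body, Category, Source.
print(df[["Title", "Body", "Category", "Source"]].head())
print(df["Category"].value_counts())  # distribution over the 19 categories
```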
## Citation
If you use our data, please cite the following paper:
```bibtex
@inproceedings{MNAD2021,
author = {Mourad Jbene and
Smail Tigani and
Rachid Saadane and
Abdellah Chehri},
title = {A Moroccan News Articles Dataset ({MNAD}) For Arabic Text Categorization},
year = {2021},
publisher = {{IEEE}},
booktitle = {2021 International Conference on Decision Aid Sciences and Application ({DASA})},
doi = {10.1109/dasa53625.2021.9682402},
url = {https://doi.org/10.1109/dasa53625.2021.9682402},
}
``` | [
-0.6397600769996643,
-0.32981765270233154,
0.07580100744962692,
0.3839731812477112,
-0.3885685205459595,
0.12293975055217743,
-0.0031341295689344406,
-0.31331032514572144,
0.3972591161727905,
0.5015926957130432,
-0.3266599178314209,
-0.9204922914505005,
-0.7011600732803345,
0.2440214902162... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Slep/LAION-RVS-Fashion | Slep | 2023-06-06T04:27:24Z | 30 | 5 | null | [
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"fashion",
"visual search",
"arxiv:2306.02928",
"region:us"
] | 2023-06-06T04:27:24Z | 2023-05-31T10:00:32.000Z | 2023-05-31T10:00:32 | ---
license: mit
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION — Referred Visual Search — Fashion
size_categories:
- 1M<n<10M
---
# **LAION — Referred Visual Search — Fashion**
*Introduced in **Weakly-Supervised Conditional Embedding for Referred Visual Search***
**[CRITEO AI Lab](https://ailab.criteo.com)** x **[ENPC](https://imagine-lab.enpc.fr)**
[Simon Lepage](https://simon-lepage.github.io), Jérémie Mary, [David Picard](https://davidpicard.github.io)
[[`Paper`](https://arxiv.org/abs/2306.02928)]
[[`Demo`](https://huggingface.co/spaces/Slep/CondViT-LRVSF-Demo)]
[[`Code`](https://github.com/Simon-Lepage/CondViT-LRVSF)]
[[`BibTeX`](#citing-the-dataset)]
---
## **Composition**
LAION-RVS-Fashion is composed of images from :
- **[LAION 2B EN](https://huggingface.co/datasets/laion/laion2B-en)**
- **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
- **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**
These images have been grouped based on extracted product IDs. Each product in the training set is composed of at least one simple image (isolated product) and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](https://arxiv.org/abs/2306.02928) for additional details.
|Split|Products|Distractors|
|-:|:-:|:-:|
|Train|272,457|-|
|Valid|400|99,541|
|Test|2,000|2,000,014|
**Total number of training images:** 841,718.
## **Samples**
<table style='text-align:center'>
<tbody>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Neck</td>
<td colspan=2>Lower Body</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>a scarf with multi-coloured stripes</td>
<td colspan=2>stella pants - dark suede</td>
</tr>
<tr></tr>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Feet</td>
<td colspan=2>Bags</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>neon green patent leather heels with studs</td>
<td colspan=2>the burberry small leather bag is brown and leather</td>
</tr>
</tbody>
</table>
## **Attributes**
- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository.
- **TEXT**: Text originally associated with the image.
- **ENG_TEXT**: Translated version for MULTI/NOLANG, copy of TEXT for EN.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), PARTIAL_COMPLEX (zoomed-in scenes)
- **PRODUCT_ID**: Product identifier; allows grouping together images depicting the same product.
- **INDEX_SRC**: ID of parquet file originally storing this image.
- **CATEGORY**: Categories of the products - `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for the products, and `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.
We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` is composed of [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` are [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248).
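A minimal sketch for inspecting `bootstrap_IDs.pkl`; the container type and exact layout are assumptions based on the description above, so inspect before relying on it.

```python
import pickle

# Load the bootstrap file shipped with the dataset. The key names
# (`test_subsets`, `dist_{N}_subsets`) follow the description above,
# but the exact structure is an assumption.
with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

print(type(bootstrap))
if isinstance(bootstrap, dict):
    print(list(bootstrap.keys()))
```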
---
## Citing the dataset
To cite our work, please use the following BibTeX entry:
```
@article{lepage2023condvit,
title={Weakly-Supervised Conditional Embedding for Referred Visual Search},
author={Lepage, Simon and Mary, Jérémie and Picard, David},
journal={arXiv:2306.02928},
year={2023}
}
``` | [
-0.5512239336967468,
-0.4466738700866699,
0.1316310465335846,
0.23860608041286469,
-0.38933566212654114,
-0.11273569613695145,
-0.15790757536888123,
-0.7160012722015381,
0.5486432909965515,
0.0781102105975151,
-0.8169456720352173,
-0.7205908894538879,
-0.3913228511810303,
0.157853946089744... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KaiLv/UDR_SNLI | KaiLv | 2023-06-21T12:49:04Z | 30 | 0 | null | [
"region:us"
] | 2023-06-21T12:49:04Z | 2023-06-21T12:48:42.000Z | 2023-06-21T12:48:42 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: sentence
dtype: string
- name: len_sentence
dtype: int64
splits:
- name: test
num_bytes: 747502
num_examples: 3262
- name: train
num_bytes: 28963424
num_examples: 131062
- name: validation
num_bytes: 750070
num_examples: 3272
- name: debug
num_bytes: 22092624
num_examples: 100000
download_size: 17825058
dataset_size: 52553620
---
# Dataset Card for "UDR_SNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3176209628582001,
-0.11824607849121094,
0.15402866899967194,
0.18497470021247864,
-0.14944522082805634,
0.06831468641757965,
0.2974219024181366,
-0.07305821776390076,
0.9634283185005188,
0.40069666504859924,
-0.7878220081329346,
-0.6146535873413086,
-0.4127698540687561,
-0.1113609150052... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VictorSanh/LrvInstruction | VictorSanh | 2023-06-30T02:39:43Z | 30 | 6 | null | [
"region:us"
] | 2023-06-30T02:39:43Z | 2023-06-28T21:44:15.000Z | 2023-06-28T21:44:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FredZhang7/toxi-text-3M | FredZhang7 | 2023-07-20T21:33:29Z | 30 | 8 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"language:ar",
"language:es",
"language:pa",
"language:th",
"language:et",
"language:fr",
"language:fi",
"language:hu",
"language:lt",
"lan... | 2023-07-20T21:33:29Z | 2023-06-28T23:28:34.000Z | 2023-06-28T23:28:34 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- zero-shot-classification
size_categories:
- 1M<n<10M
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
- 'no'
- multilingual
tags:
- nlp
- moderation
---
[A demo for a model finetuned on this and other datasets](https://huggingface.co/spaces/aivance/one-for-all-toxicity-v3)
This is a large multilingual toxicity dataset with 3M rows of text data from 55 natural languages, all of which are written/sent by humans, not machine translation models.
The preprocessed training data alone consists of 2,880,667 rows of comments, tweets, and messages. Among these rows, 416,529 are classified as toxic, while the remaining 2,464,138 are considered neutral. Below is a table to illustrate the data composition:
| | Toxic | Neutral | Total |
|-------|----------|----------|----------|
| [multilingual-train-deduplicated.csv](./train/multilingual-train-deduplicated.csv) | 416,529 | 2,464,138 | 2,880,667 |
| [mulilingual-validation(new).csv](./validation/mulilingual-validation(new).csv) | 10,613 | 19,028 | 29,641 |
| [multilingual-test.csv](./test/multilingual-test.csv) | 14,410 | 49,402 | 63,812 |
Each CSV file has three columns: `text`, `is_toxic`, and `lang`.
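As a quick-start illustration, here is a minimal pandas sketch for the training file; it assumes the CSVs have been downloaded locally with the directory layout shown in the table above.

```python
import pandas as pd

train = pd.read_csv("train/multilingual-train-deduplicated.csv")

# The three documented columns: text, is_toxic, lang.
print(train["is_toxic"].value_counts())       # toxic vs. neutral counts
print(train["lang"].value_counts().head(10))  # most frequent languages
```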
Supported types of toxicity:
- Identity Hate/Homophobia
- Misogyny
- Violent Extremism
- Hate Speech
- Offensive Insults
- Sexting
- Obscene
- Threats
- Harassment
- Racism
- Trolling
- Doxing
- Others
Supported languages:
- Afrikaans
- Albanian
- Arabic
- Bengali
- Bulgarian
- Catalan
- Chinese (Simplified)
- Chinese (Traditional)
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Macedonian
- Malayalam
- Marathi
- Nepali
- Norwegian
- Persian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Slovak
- Slovenian
- Somali
- Spanish
- Swahili
- Swedish
- Tagalog
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
- Welsh
<br>
### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
Recently, I came across 6 datasets, so I remembered to credit them below.
Known datasets:
- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
- datasets/thai_toxicity_tweet (HuggingFace)
- datasets/ethos (HuggingFace)
- inspection-ai/japanese-toxic-dataset (GitHub)
- mathigatti/sexting-dataset (GitHub)
- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)
I manually collected and wrote 100 rows of data.
<br>
### Limitations
Limitations include:
- All labels were rounded to the nearest integer. If a text was classified as 46%-54% toxic, the text itself might not be noticeably toxic or neutral.
- There were disagreements among moderators on some labels, due to ambiguity and lack of context.
- When the "text" column contains only URL(s), emojis, or anything else unrecognizable as natural language, the corresponding "lang" is "unknown".
Have fun modelling! | [
0.03050108067691326,
-0.4287976622581482,
0.3407324552536011,
0.458649605512619,
-0.006621215492486954,
-0.21068549156188965,
-0.14027778804302216,
-0.27894654870033264,
0.12165679782629013,
0.5726332664489746,
-0.553730845451355,
-0.9506316781044006,
-0.4921592175960541,
0.450428992509841... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LKarlo/ncbi-virus-complete-dna-v230722 | LKarlo | 2023-07-25T02:19:38Z | 30 | 0 | null | [
"region:us"
] | 2023-07-25T02:19:38Z | 2023-07-22T07:07:49.000Z | 2023-07-22T07:07:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamshnoo/alpaca-cleaned-persian | iamshnoo | 2023-09-15T23:20:43Z | 30 | 1 | null | [
"region:us"
] | 2023-09-15T23:20:43Z | 2023-07-31T04:55:58.000Z | 2023-07-31T04:55:58 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 57273102
num_examples: 51760
download_size: 25446305
dataset_size: 57273102
---
Translated from yahma/alpaca-cleaned using NLLB-1.3B
# Dataset Card for "alpaca-cleaned-persian"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6148422956466675,
-0.8348003625869751,
0.06299024075269699,
0.10468622297048569,
-0.7205210328102112,
-0.18104948103427887,
0.0044760252349078655,
-0.6941906809806824,
0.8807668685913086,
0.8281152248382568,
-0.8922889232635498,
-0.5978785157203674,
-0.5300682187080383,
-0.0074812225066... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nampdn-ai/tiny-orca-textbooks | nampdn-ai | 2023-09-28T02:15:06Z | 30 | 12 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2309.05463",
"arxiv:2305.07759",
"region:us"
] | 2023-09-28T02:15:06Z | 2023-08-04T09:44:37.000Z | 2023-08-04T09:44:37 | ---
task_categories:
- text-generation
language:
- en
pretty_name: Tiny Orca Textbooks
size_categories:
- 100K<n<1M
license: cc-by-nc-sa-4.0
---
# Textbook-like Dataset: A Comprehensive Resource for Text-Based Skills Development in Small Language Models
This dataset is a collection of **147k synthetic textbooks** designed to enhance the text-based skills of small language models. The curriculum is meticulously structured to progress from simple to complex tasks, ensuring a gradual and effective learning experience during pretraining or finetuning SLMs.
The inspiration for this dataset comes from the technical report paper, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463). The source texts incorporated in this dataset are derived from the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, a well-known resource in the field.
Emphasizing text-based skills, this dataset serves as practical reasoning material for small language models to learn from and exercise on, providing them with a diverse range of skills to learn and adapt. The step-by-step progression mirrors the structure of a textbook, making it an ideal in-context learning sample.
### Disclaimer
While every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.
The use of the `textbook` field in this dataset is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.
## Tiny Series
Explore the possibilities and limitations of building Small Language Models with these tiny gems of data!
- [TinyStories](https://arxiv.org/abs/2305.07759): The paper that sparked my interest in the journey of the tiny-* series.
- [tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes): Collection of 1.6M short and clear code snippets that can help LLM models learn how to reason.
- [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks): 420k "things of internet" synthetic textbooks.
- [tiny-webtext](https://huggingface.co/datasets/nampdn-ai/tiny-webtext): A 6GB (4.5M records) variety of diverse webtext enriched with critical thinking methods to make an unbiased English dataset.
- [tiny-lessons](https://huggingface.co/datasets/nampdn-ai/tiny-lessons): Subset of *tiny-textbooks* dataset, various lessons about "things of internet" augmented in a bite-sized textbook Markdown format.
- [tiny-bridgedict](https://huggingface.co/datasets/nampdn-ai/tiny-bridgedict): A dataset that links and transfers knowledge between English, Vietnamese, and Chinese in tiny multilingual models.
### Others small HQ datasets with textbook-like quality
- [devdocs.io](https://huggingface.co/datasets/nampdn-ai/devdocs.io): FreeCodeCamp has provided 189k comprehensive API documentation entries across a wide range of tech stacks and programming languages.
- [sciphi-python-textbook](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook)
- [textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming)
- [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
| [
-0.3361797034740448,
-0.5121669173240662,
0.22663383185863495,
-0.225455179810524,
-0.000572895398363471,
-0.15210148692131042,
-0.3118157386779785,
-0.2927893400192261,
-0.07558383792638779,
0.3754401206970215,
-0.41976043581962585,
-0.5042847990989685,
-0.05282478779554367,
0.10893185436... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pie/cdcp | pie | 2023-11-27T09:59:34Z | 30 | 0 | null | [
"region:us"
] | 2023-11-27T09:59:34Z | 2023-08-08T10:17:53.000Z | 2023-08-08T10:17:53 | # PIE Dataset Card for "CDCP"
This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the
[CDCP Huggingface dataset loading script](https://huggingface.co/datasets/DFKI-SLT/cdcp).
## Data Schema
The document type for this dataset is `CDCPDocument` which defines the following data fields:
- `text` (str)
- `id` (str, optional)
- `metadata` (dictionary, optional)
and the following annotation layers:
- `propositions` (annotation type: `LabeledSpan`, target: `text`)
- `relations` (annotation type: `BinaryRelation`, target: `propositions`)
- `urls` (annotation type: `Attribute`, target: `propositions`)
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/annotations.py) for the annotation type definitions.
## Document Converters
The dataset provides document converters for the following target document types:
- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
definitions.
| [
-0.48055148124694824,
-0.4837416410446167,
0.31737616658210754,
0.3044881522655487,
-0.27369067072868347,
0.0010163537226617336,
-0.09502854198217392,
-0.093787282705307,
0.4176492691040039,
0.4281952679157257,
-0.5516728758811951,
-0.936758816242218,
-0.5884291529655457,
0.105949513614177... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kranthigv/alpaca-gpt4 | kranthigv | 2023-08-22T17:33:31Z | 30 | 0 | null | [
"region:us"
] | 2023-08-22T17:33:31Z | 2023-08-22T17:33:08.000Z | 2023-08-22T17:33:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KushT/reuters-21578-train-val-test | KushT | 2023-08-25T12:24:45Z | 30 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-08-25T12:24:45Z | 2023-08-25T12:18:15.000Z | 2023-08-25T12:18:15 | ---
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 10816829
num_examples: 6988
- name: validation
num_bytes: 1178067
num_examples: 781
- name: test
num_bytes: 4513694
num_examples: 3019
download_size: 5088303
dataset_size: 16508590
language:
- en
---
Dataset from [Kaggle](https://www.kaggle.com/datasets/nltkdata/reuters/code)
The split is done on the training set using ```iterative_train_test_split``` from [scikit-multilearn](http://scikit.ml/index.html)
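For reference, here is a toy sketch of that call. The feature and label matrices below are illustrative stand-ins (the real dataset has 90 label columns); the actual split appears to reserve roughly 10% of the training set (781 of 7,769 examples) for validation.

```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# Illustrative inputs: X holds the documents (one row each), y is a binary
# indicator matrix over the labels (90 columns in the real dataset).
X = np.array([["doc one"], ["doc two"], ["doc three"], ["doc four"]])
y = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])

X_train, y_train, X_val, y_val = iterative_train_test_split(X, y, test_size=0.25)
print(X_train.shape, X_val.shape)
```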
There are the following 90 labels.
'interest',
'groundnut-oil',
'potato',
'palmkernel',
'sun-meal',
'lei',
'cotton-oil',
'sunseed',
'sorghum',
'barley',
'dlr',
'groundnut',
'wpi',
'strategic-metal',
'livestock',
'l-cattle',
'lin-oil',
'gold',
'fuel',
'nzdlr',
'oat',
'soybean',
'hog',
'tin',
'lumber',
'bop',
'soy-oil',
'dfl',
'nkr',
'gas',
'carcass',
'silver',
'coffee',
'gnp',
'crude',
'rapeseed',
'alum',
'copper',
'housing',
'grain',
'cocoa',
'sun-oil',
'rice',
'jobs',
'rubber',
'jet',
'tea',
'retail',
'ship',
'corn',
'meal-feed',
'naphtha',
'sugar',
'rand',
'platinum',
'money-supply',
'yen',
'nickel',
'income',
'cpu',
'copra-cake',
'instal-debt',
'coconut-oil',
'cotton',
'rye',
'palm-oil',
'acq',
'wheat',
'propane',
'dmk',
'reserves',
'rape-oil',
'money-fx',
'heat',
'ipi',
'castor-oil',
'earn',
'iron-steel',
'palladium',
'coconut',
'veg-oil',
'nat-gas',
'pet-chem',
'lead',
'trade',
'cpi',
'oilseed',
'zinc',
'soy-meal',
'orange' | [
-0.4111417531967163,
-0.15411248803138733,
0.06104424595832825,
0.27249476313591003,
-0.24700601398944855,
0.5342860221862793,
-0.19000141322612762,
-0.20269280672073364,
0.43307143449783325,
0.25555896759033203,
-0.6026390790939331,
-0.6965905427932739,
-0.8810160756111145,
0.342134177684... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DRAGOO/dataset_dyal_darija | DRAGOO | 2023-08-25T22:08:17Z | 30 | 2 | null | [
"region:us"
] | 2023-08-25T22:08:17Z | 2023-08-25T22:07:07.000Z | 2023-08-25T22:07:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rams901/sql-create-context-modified | Rams901 | 2023-09-16T14:11:35Z | 30 | 1 | null | [
"region:us"
] | 2023-09-16T14:11:35Z | 2023-09-16T14:11:32.000Z | 2023-09-16T14:11:32 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1415326
num_examples: 3000
download_size: 632495
dataset_size: 1415326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sql-create-context-modified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5409995317459106,
-0.6824015378952026,
0.1885443478822708,
0.22846020758152008,
-0.24723009765148163,
-0.35690805315971375,
0.029347700998187065,
-0.15373040735721588,
0.9200505018234253,
0.7557103633880615,
-0.8681156635284424,
-0.701822817325592,
-0.25841259956359863,
-0.2158649414777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nicolas-BZRD/English_French_Webpages_Scraped_Translated | Nicolas-BZRD | 2023-09-21T14:29:04Z | 30 | 0 | null | [
"task_categories:translation",
"size_categories:10M<n<100M",
"language:en",
"language:fr",
"license:odbl",
"webpages",
"parallel",
"parallel data",
"region:us"
] | 2023-09-21T14:29:04Z | 2023-09-21T12:54:23.000Z | 2023-09-21T12:54:23 | ---
language:
- en
- fr
license: odbl
size_categories:
- 10M<n<100M
task_categories:
- translation
tags:
- webpages
- parallel
- parallel data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 6811772380
num_examples: 17161263
download_size: 640497280
dataset_size: 6811772380
---
# English French Webpages Scraped Translated
### Dataset Summary
French/English parallel texts for training translation models: over 17.1 million aligned sentence pairs in French and English. The dataset was created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs into English URLs, assuming that these documents are translations of each other. This is the main dataset of the Workshop on Statistical Machine Translation (WMT) 2015 and can be used for machine translation and language models. Refer to the paper here: http://www.statmt.org/wmt15/pdf/WMT01.pdf
### Post-process
This dataset has been post-processed to remove all duplicates, empty fields, and sentences containing fewer than 5 words.
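Here is a sketch of how that post-processing could be reproduced with pandas; the DataFrame below is a hypothetical stand-in with the dataset's `en` and `fr` columns.

```python
import pandas as pd

df = pd.DataFrame({"en": ["Hello world once more today", ""],
                   "fr": ["Bonjour le monde encore une fois", "x"]})

df = df.dropna().drop_duplicates()  # drop missing fields and exact duplicates
# Keep only pairs where both sides have at least 5 words; this also
# removes rows with empty strings.
keep = df["en"].str.split().str.len().ge(5) & df["fr"].str.split().str.len().ge(5)
df = df[keep]
print(df)
```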
### Original Dataset Citation
```
@InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
}
``` | [
-0.346822589635849,
-0.34843066334724426,
0.46548962593078613,
0.3016657531261444,
-0.1751859039068222,
-0.0322161540389061,
-0.21086561679840088,
-0.3410691022872925,
-0.08124467730522156,
0.59679114818573,
-0.46157699823379517,
-0.7357041835784912,
-0.5688075423240662,
0.6659489274024963... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
goendalf666/sales-conversations | goendalf666 | 2023-10-04T20:39:04Z | 30 | 6 | null | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"sales",
"arxiv:2306.11644",
"region:us"
] | 2023-10-04T20:39:04Z | 2023-09-21T21:37:30.000Z | 2023-09-21T21:37:30 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: '0'
dtype: string
- name: '1'
dtype: string
- name: '2'
dtype: string
- name: '3'
dtype: string
- name: '4'
dtype: string
- name: '5'
dtype: string
- name: '6'
dtype: string
- name: '7'
dtype: string
- name: '8'
dtype: string
- name: '9'
dtype: string
- name: '10'
dtype: string
- name: '11'
dtype: string
- name: '12'
dtype: string
- name: '13'
dtype: string
- name: '14'
dtype: string
- name: '15'
dtype: string
- name: '16'
dtype: string
- name: '17'
dtype: string
- name: '18'
dtype: string
- name: '19'
dtype: string
splits:
- name: train
num_bytes: 6821725
num_examples: 3412
download_size: 2644154
dataset_size: 6821725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- sales
---
# Dataset Card for "sales-conversations"
This dataset was created for the purpose of training a sales agent chatbot that can convince people.
The initial idea came from the paper "Textbooks Are All You Need": https://arxiv.org/abs/2306.11644
gpt-3.5-turbo was used for the generation
# Structure
The conversations have a customer and a salesman who always appear in alternating order: customer, salesman, customer, salesman, etc.
The customer always starts the conversation.
Who ends the conversation is not defined.
# Generation
Note that a textbook dataset is mandatory for this conversation generation. These examples rely on the following textbook dataset:
https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling
The data generation code can be found here: https://github.com/tom813/salesGPT_foundation/blob/main/data_generation/textbook_and_conversation_gen.py
The following prompt was used to create a conversation
```
import random

def create_random_prompt(chapter, roles=["Customer", "Salesman"], range_vals=(3, 7), industries=None):
    if industries is None:
        industries = ["tech", "health", "finance"]  # default industries; replace with your own list if different

    # Number of customer/salesman exchanges per conversation.
    x = random.randint(*range_vals)

    # Pick the largest y (number of conversations to request) such that y * x < 27.
    y = 0
    for i in reversed(range(3, 9)):
        if i * x < 27:
            y = i
            break

    conversation_structure = ""
    for i in range(1, x + 1):
        conversation_structure += f"""
{roles[0]}: #{i}. sentence of {roles[0].lower()}
{roles[1]}: #{i}. sentence of {roles[1].lower()}"""

    prompt = f"""Here is a chapter from a textbook about convincing people.
The purpose of this data is to use it to fine tune a llm.
Generate conversation examples that are based on the chapter that is provided and would help an ai to learn the topic by examples.
Focus only on the topic that is given in the chapter when generating the examples.
Let the example be in the {random.choice(industries)} industry.

Follow this structure and put each conversation in a list of objects in json format. Only return the json nothing more:
{conversation_structure}

Generate {y} lists of those conversations

Chapter:{chapter}"""

    return prompt
```
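For illustration, here is a hypothetical call; the chapter string below is a stand-in for a real chapter from the textbook dataset linked above.

```python
chapter = "Building rapport: mirror the customer's concerns before pitching."
prompt = create_random_prompt(chapter, industries=["tech", "health", "finance"])
print(prompt)
```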
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.13922268152236938,
-0.8602705597877502,
0.23747023940086365,
-0.07871650904417038,
-0.08777114748954773,
-0.285162091255188,
-0.1815413236618042,
-0.05408734083175659,
0.04647074639797211,
0.6352516412734985,
-0.7360265254974365,
-0.7434679269790649,
-0.06168172135949135,
-0.06111523509... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
charlie8522/Totto_testing | charlie8522 | 2023-09-24T14:42:33Z | 30 | 0 | null | [
"region:us"
] | 2023-09-24T14:42:33Z | 2023-09-22T07:11:31.000Z | 2023-09-22T07:11:31 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patched-codes/static-analysis-eval | patched-codes | 2023-10-02T09:09:06Z | 30 | 1 | null | [
"region:us"
] | 2023-10-02T09:09:06Z | 2023-09-22T08:24:16.000Z | 2023-09-22T08:24:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: source
dtype: string
- name: file_name
dtype: string
- name: cwe
dtype: string
splits:
- name: train
num_bytes: 87854
num_examples: 76
download_size: 53832
dataset_size: 87854
---
# Dataset Card for "static-analysis-eval"
A dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub),
where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep). | [
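A minimal loading sketch based on the schema above (`source`, `file_name`, `cwe`), assuming the standard `datasets` interface:

```python
from datasets import load_dataset

ds = load_dataset("patched-codes/static-analysis-eval", split="train")

sample = ds[0]
print(sample["file_name"], sample["cwe"])  # the CWE flagged by Semgrep
print(sample["source"][:200])              # start of the vulnerable file
```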
-0.548011302947998,
-0.7154862284660339,
-0.1178475171327591,
-0.19155356287956238,
-0.02088390290737152,
0.12335921823978424,
0.0777927041053772,
0.02573290839791298,
0.2050795704126358,
0.5696127414703369,
-0.5632398128509521,
-0.6048253774642944,
-0.1725340038537979,
0.01662898436188697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/liputan6 | SEACrowd | 2023-09-26T12:30:04Z | 30 | 0 | null | [
"language:ind",
"summarization",
"region:us"
] | 2023-09-26T12:30:04Z | 2023-09-26T11:11:20.000Z | 2023-09-26T11:11:20 | ---
tags:
- summarization
language:
- ind
---
# liputan6
A large-scale Indonesian summarization dataset consisting of harvested articles from Liputan6.com, an online news portal, resulting in 215,827 document-summary pairs.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
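A minimal loading sketch; the default config is an assumption, and `trust_remote_code` may be required for script-based loaders.

```python
from datasets import load_dataset

# Run `pip install nusacrowd` first, per the note above.
dset = load_dataset("SEACrowd/liputan6", trust_remote_code=True)
print(dset)  # inspect the available splits and document-summary fields
```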
## Citation
```
@inproceedings{koto2020liputan6,
title={Liputan6: A Large-scale Indonesian Dataset for Text Summarization},
author={Koto, Fajri and Lau, Jey Han and Baldwin, Timothy},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
pages={598--608},
year={2020}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/fajri91/sum_liputan6](https://github.com/fajri91/sum_liputan6)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.5077923536300659,
-0.3162391185760498,
-0.09752984344959259,
0.32688722014427185,
-0.5633718371391296,
-0.26248520612716675,
-0.18917858600616455,
-0.28105270862579346,
0.5355333685874939,
0.811295211315155,
-0.2273411601781845,
-0.7168759703636169,
-0.5831157565116882,
0.62613171339035... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/keps | SEACrowd | 2023-09-26T12:34:57Z | 30 | 0 | null | [
"language:ind",
"keyword-extraction",
"region:us"
] | 2023-09-26T12:34:57Z | 2023-09-26T11:42:43.000Z | 2023-09-26T11:42:43 | ---
tags:
- keyword-extraction
language:
- ind
---
# keps
The KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter
discussing banking products and services and is written in the Indonesian language. A phrase
containing important information is considered a keyphrase. Text may contain one or more
keyphrases since important phrases can be located at different positions.
- tokens: a list of string features.
- seq_label: a list of classification labels, with possible values including O, B, I.
The labels use Inside-Outside-Beginning (IOB) tagging.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
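A minimal loading sketch; the config and split names are assumptions, and the `tokens`/`seq_label` fields follow the description above.

```python
from datasets import load_dataset

# Run `pip install nusacrowd` first, per the note above.
dset = load_dataset("SEACrowd/keps", trust_remote_code=True)

example = dset["train"][0]  # split name is an assumption -- print(dset) to check
print(list(zip(example["tokens"], example["seq_label"])))  # IOB tag per token
```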
## Citation
```
@inproceedings{mahfuzh2019improving,
title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},
  author={Miftahul Mahfuzh and Sidik Soleman and Ayu Purwarianti},
booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--6},
year={2019},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.2187025398015976,
-0.5589922070503235,
0.19108639657497406,
0.28271278738975525,
-0.6232208609580994,
0.04058573395013809,
-0.053568173199892044,
-0.43730831146240234,
0.3344953656196594,
0.7016011476516724,
-0.13829508423805237,
-0.6901752352714539,
-0.7137216329574585,
0.3215746581554... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PocketDoc/Choose-Your-Story-Long-Text-Adventures | PocketDoc | 2023-10-16T04:39:05Z | 30 | 7 | null | [
"task_categories:conversational",
"language:en",
"not-for-all-audiences",
"region:us"
] | 2023-10-16T04:39:05Z | 2023-10-07T20:04:56.000Z | 2023-10-07T20:04:56 | ---
tags:
- not-for-all-audiences
task_categories:
- conversational
language:
- en
pretty_name: Choose Your Story Novel Format Text Adventures
---
This is the 'CYS' text adventure dataset converted to a chat format with system messages. The system messages were randomly constructed from a table of phrases and templates. The original data can be found in the .7z archive.
**Credits:**
Thank you to VE Forbryderne from KoboldAI for scraping the dataset. | [
0.0239222701638937,
-0.3598152995109558,
0.2166820764541626,
0.2685265839099884,
-0.4350494146347046,
-0.002117002382874489,
-0.20481818914413452,
-0.1876736581325531,
0.8717247247695923,
1.187821626663208,
-1.1793895959854126,
-0.550029993057251,
-0.30313271284103394,
0.24251627922058105,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
slaqrichi/Cosmic_dataset_V3 | slaqrichi | 2023-10-11T16:12:38Z | 30 | 0 | null | [
"region:us"
] | 2023-10-11T16:12:38Z | 2023-10-11T16:12:30.000Z | 2023-10-11T16:12:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: ID
dtype: int64
- name: requirement
dtype: string
- name: functional process
dtype: string
- name: functional user
dtype: string
- name: sub processes
dtype: string
- name: data groups
dtype: string
splits:
- name: train
num_bytes: 22158.157894736843
num_examples: 17
- name: test
num_bytes: 1303.421052631579
num_examples: 1
- name: valid
num_bytes: 1303.421052631579
num_examples: 1
download_size: 43297
dataset_size: 24765.000000000004
---
# Dataset Card for "Cosmic_dataset_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5491254329681396,
-0.19610652327537537,
0.45100051164627075,
0.2948274612426758,
-0.2878789007663727,
-0.13571549952030182,
0.4850439131259918,
-0.3045528531074524,
0.8016279935836792,
0.6658798456192017,
-0.9627867937088013,
-0.7247605323791504,
-0.6049519777297974,
-0.2058330774307251... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexrs/alpaca-cleaned-15-clusters | alexrs | 2023-10-16T14:43:05Z | 30 | 0 | null | [
"region:us"
] | 2023-10-16T14:43:05Z | 2023-10-16T14:43:02.000Z | 2023-10-16T14:43:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 40490946
num_examples: 51760
download_size: 24185910
dataset_size: 40490946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-cleaned-15-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8423226475715637,
-0.39221012592315674,
0.3352046310901642,
0.30604320764541626,
-0.3381996750831604,
-0.04275139421224594,
0.25068411231040955,
-0.31079965829849243,
1.0414403676986694,
0.5492473840713501,
-0.9289328455924988,
-0.9364821910858154,
-0.5461785793304443,
-0.13573649525642... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
expertai/BUSTER | expertai | 2023-11-24T11:30:54Z | 30 | 2 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] | 2023-11-24T11:30:54Z | 2023-10-18T13:03:49.000Z | 2023-10-18T13:03:49 | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- token-classification
pretty_name: buster
tags:
- finance
configs:
- config_name: default
data_files:
- split: FOLD_1
path: data/FOLD_1-*
- split: FOLD_2
path: data/FOLD_2-*
- split: FOLD_3
path: data/FOLD_3-*
- split: FOLD_4
path: data/FOLD_4-*
- split: FOLD_5
path: data/FOLD_5-*
- split: SILVER
path: data/SILVER-*
dataset_info:
features:
- name: document_id
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: FOLD_1
num_bytes: 13597946
num_examples: 753
- name: FOLD_2
num_bytes: 13477878
num_examples: 759
- name: FOLD_3
num_bytes: 13602552
num_examples: 758
- name: FOLD_4
num_bytes: 13834760
num_examples: 755
- name: FOLD_5
num_bytes: 13632431
num_examples: 754
- name: SILVER
num_bytes: 108914416
num_examples: 6196
download_size: 47115333
dataset_size: 177059983
---
# Dataset Card for BUSTER
BUSiness Transaction Entity Recognition dataset.
BUSTER is an Entity Recognition (ER) benchmark for entities related to business transactions. It consists of a gold corpus of
3779 manually annotated documents on financial transactions that were randomly divided into 5 folds,
plus an additional silver corpus of 6196 documents that were annotated automatically by an optimized RoBERTa-based system.
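A loading sketch based on the split names declared above; training on four folds and evaluating on the fifth is one common pattern, not a prescribed protocol.

```python
from datasets import load_dataset

buster = load_dataset("expertai/BUSTER")

# The gold corpus ships as five folds, plus an automatically annotated
# SILVER split.
train_folds = [f"FOLD_{i}" for i in range(1, 5)]
train = [example for fold in train_folds for example in buster[fold]]
test = buster["FOLD_5"]
print(len(train), len(test))
```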
-0.7357842922210693,
-0.850250780582428,
0.11749984323978424,
-0.13509677350521088,
-0.2396170049905777,
-0.03267504647374153,
0.37860721349716187,
-0.7946071624755859,
0.5473054051399231,
0.747138500213623,
-0.3496379256248474,
-0.2523561716079712,
-0.5291677713394165,
0.3332142233848572,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
123rc/medical_text | 123rc | 2023-10-19T12:38:57Z | 30 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"license:apache-2.0",
"medical",
"region:us"
] | 2023-10-19T12:38:57Z | 2023-10-19T12:33:21.000Z | 2023-10-19T12:33:21 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- medical
size_categories:
- 10K<n<100K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ChristophSchuhmann/yt-urls-for-emotional-tts | ChristophSchuhmann | 2023-10-21T07:46:42Z | 30 | 0 | null | [
"region:us"
] | 2023-10-21T07:46:42Z | 2023-10-21T07:45:25.000Z | 2023-10-21T07:45:25 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
llmware/rag_instruct_test_dataset2_financial_0.1 | llmware | 2023-10-23T15:01:44Z | 30 | 4 | null | [
"license:apache-2.0",
"finance",
"retrieval augmented generation",
"RAG",
"region:us"
] | 2023-10-23T15:01:44Z | 2023-10-22T15:19:52.000Z | 2023-10-22T15:19:52 | ---
license: apache-2.0
tags:
- finance
- retrieval augmented generation
- RAG
pretty_name: RAG Instruct Test Dataset 2 - Financial - v0.1
---
# Dataset Card for RAG-Instruct-Financial-Test-Dataset
### Dataset Summary
This is a test dataset for "retrieval augmented generation" (RAG) use cases, especially for financial data extraction and analysis. It includes a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering, as well as recognizing when information is not included in a particular source). The test dataset includes 100 samples with context passages pulled from common retrieval scenarios in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an instruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bullet points. The context passages in this test set are relatively short, ranging from ~100 to ~500 tokens. The set was designed for use with the BLING series of models but is suitable for comparison evaluations of any LLM in basic RAG scenarios.
This is part of a series of RAG-Instruct test datasets from llmware.
### Languages
English
## Dataset Structure
100 JSONL samples with 4 keys - "query" | "context" | "answer" | "sample_number"
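A minimal reading sketch; the local filename below is hypothetical (use whatever file the repository ships).

```python
import json

# Hypothetical filename -- substitute the JSONL file from the repository.
with open("rag_instruct_test_dataset2_financial.jsonl") as f:
    samples = [json.loads(line) for line in f]

print(len(samples))  # expected: 100
s = samples[0]
print(s["sample_number"], s["query"])
print(s["context"][:200])
print(s["answer"])
```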
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events.
Any other names were created/masked and any overlap with real companies or people is coincidental.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project and would like to participate and work with us!
| [
-0.19482068717479706,
-0.6377229690551758,
-0.11408121883869171,
0.31651970744132996,
-0.32094573974609375,
0.15527687966823578,
0.002720009768381715,
-0.24276863038539886,
0.043062031269073486,
0.7043034434318542,
-0.39955058693885803,
-0.4361782371997833,
-0.25043487548828125,
0.05308334... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lornng/cpgQA-textcol-splitted | Lornng | 2023-10-24T10:33:46Z | 30 | 0 | null | [
"region:us"
] | 2023-10-24T10:33:46Z | 2023-10-23T17:19:25.000Z | 2023-10-23T17:19:25 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deven367/babylm-10M | deven367 | 2023-10-27T17:40:10Z | 30 | 0 | null | [
"region:us"
] | 2023-10-27T17:40:10Z | 2023-10-27T17:40:02.000Z | 2023-10-27T17:40:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 57630095
num_examples: 1015521
- name: valid
num_bytes: 54930583
num_examples: 986022
- name: test
num_bytes: 59992087
num_examples: 1008854
download_size: 108516100
dataset_size: 172552765
---
# Dataset Card for "babylm-10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5785196423530579,
-0.19051091372966766,
0.003213032614439726,
0.39524903893470764,
-0.3714827299118042,
-0.01856352388858795,
0.3371925950050354,
-0.1680866926908493,
0.7085327506065369,
0.5375604033470154,
-0.8814164400100708,
-0.6724643111228943,
-0.6752158403396606,
-0.31859299540519... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bobbybelajar/AmazonGrouped | bobbybelajar | 2023-10-29T07:39:25Z | 30 | 0 | null | [
"region:us"
] | 2023-10-29T07:39:25Z | 2023-10-29T07:06:42.000Z | 2023-10-29T07:06:42 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DemonMaike/psy | DemonMaike | 2023-10-29T19:29:50Z | 30 | 0 | null | [
"region:us"
] | 2023-10-29T19:29:50Z | 2023-10-29T19:28:50.000Z | 2023-10-29T19:28:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17347047.535353534
num_examples: 445
- name: test
num_bytes: 1949106.4646464647
num_examples: 50
download_size: 8873475
dataset_size: 19296154.0
---
# Dataset Card for "psy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47129929065704346,
-0.031844690442085266,
0.3958442211151123,
0.36492642760276794,
-0.2521548867225647,
0.045575886964797974,
0.23268446326255798,
-0.10673250257968903,
1.0193994045257568,
0.5016872882843018,
-1.017814040184021,
-0.6195687651634216,
-0.65521639585495,
-0.097208477556705... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aminlouhichi/donut5 | aminlouhichi | 2023-10-30T12:55:34Z | 30 | 0 | null | [
"region:us"
] | 2023-10-30T12:55:34Z | 2023-10-30T12:55:16.000Z | 2023-10-30T12:55:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 12953017.0
num_examples: 60
- name: validation
num_bytes: 12953017.0
num_examples: 60
- name: test
num_bytes: 25755968.0
num_examples: 60
download_size: 41314952
dataset_size: 51662002.0
---
# Dataset Card for "donut5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44393911957740784,
-0.0683620274066925,
0.3237069547176361,
0.09609446674585342,
0.012938185594975948,
0.1454872488975525,
0.21933184564113617,
-0.13154201209545135,
0.7515149712562561,
0.5042569637298584,
-0.8520952463150024,
-0.7919110655784607,
-0.612139105796814,
-0.1381774693727493... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Skywork/ChineseDomainModelingEval | Skywork | 2023-11-02T03:51:43Z | 30 | 8 | null | [
"license:other",
"arxiv:2310.19341",
"region:us"
] | 2023-11-02T03:51:43Z | 2023-11-01T04:35:36.000Z | 2023-11-01T04:35:36 | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
# Introduction
Skywork/ChineseDomainModelingEval is an evaluation dataset for Chinese domain modeling ability. For each of several domains, we selected several hundred to over a thousand high-quality articles newly published between September and October 2023 and verified them manually, so the test data is both broad in coverage and high in quality. Because we can always draw on the most recent articles to measure the perplexity of different models, it is hard for a model to cheat. We will keep evaluating models against the latest data and dynamically update the results. A sketch of such a perplexity evaluation follows the file list below.
# File Introduction
- zh_finance.jsonl: evaluation data for the finance domain
- zh_game.jsonl: evaluation data for the gaming domain
- zh_government.jsonl: evaluation data for the government-affairs domain
- zh_movie.jsonl: evaluation data for the movie domain
- zh_tech.jsonl: evaluation data for the technology domain
- zh_general.jsonl: evaluation data for the general domain
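For illustration, a minimal perplexity evaluation over one of the files might look like the sketch below. It is not the official evaluation script, and it assumes each JSONL line carries a `text` field; substitute the model under evaluation for the placeholder name:
```python
import json
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the model under evaluation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

nll_sum, token_count = 0.0, 0
with open("zh_finance.jsonl", encoding="utf-8") as f:
    for line in f:
        text = json.loads(line)["text"]  # assumed field name
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n_tokens = enc["input_ids"].numel()
        nll_sum += out.loss.item() * n_tokens  # loss is the (approximate) mean NLL per token
        token_count += n_tokens

print(f"perplexity: {math.exp(nll_sum / token_count):.2f}")
```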
# License Agreement
Community usage of the SkyPile dataset requires the Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions of the Skywork Community License as well as Apache 2.0.
# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper!
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.22318311035633087,
-0.5374863147735596,
-0.08037888258695602,
0.57293301820755,
-0.37955769896507263,
-0.27320656180381775,
-0.01645498536527157,
-0.41538143157958984,
-0.019333872944116592,
0.5878645777702332,
-0.44429609179496765,
-0.45017555356025696,
-0.6634376645088196,
0.054954238... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yeshwanth-03-06-2004/twitter_bios | Yeshwanth-03-06-2004 | 2023-11-02T10:59:59Z | 30 | 0 | null | [
"region:us"
] | 2023-11-02T10:59:59Z | 2023-11-01T07:01:17.000Z | 2023-11-01T07:01:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
re2panda/click_bate_1000_final | re2panda | 2023-11-03T07:31:57Z | 30 | 0 | null | [
"region:us"
] | 2023-11-03T07:31:57Z | 2023-11-03T07:08:04.000Z | 2023-11-03T07:08:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 35358262.8
num_examples: 11400
- name: test
num_bytes: 1860961.2
num_examples: 600
download_size: 20825026
dataset_size: 37219224.0
---
# Dataset Card for "click_bate_1000_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.718902051448822,
-0.19365279376506805,
0.2987520098686218,
0.16239893436431885,
-0.08327426761388779,
-0.009766966104507446,
0.403550386428833,
0.04540630057454109,
0.9570578932762146,
0.6517660021781921,
-0.850919783115387,
-0.8708499670028687,
-0.4097616970539093,
-0.09703986346721649... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maywell/ko_hh-rlhf-20k_filtered | maywell | 2023-11-04T18:45:29Z | 30 | 2 | null | [
"region:us"
] | 2023-11-04T18:45:29Z | 2023-11-04T03:38:26.000Z | 2023-11-04T03:38:26 | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 30828302
num_examples: 19363
download_size: 15034439
dataset_size: 30828302
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ko_hh-rlhf-20k_filtered"
This is a 20k RLHF set translated with the Synatra-Translation model. The translation quality is not outstanding; additional training on data such as dialogue appears to be needed.
## Base Dataset
[Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) | [
-0.4540056586265564,
-0.7774622440338135,
0.16736744344234467,
0.39703094959259033,
-1.0868910551071167,
0.1048475056886673,
0.02523004449903965,
-0.5539982318878174,
0.9119052290916443,
0.7804825901985168,
-0.899985134601593,
-0.9910724759101868,
-0.6659504175186157,
0.38888993859291077,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nathanReitinger/mlcb-lite | nathanReitinger | 2023-11-05T02:38:26Z | 30 | 0 | null | [
"region:us"
] | 2023-11-05T02:38:26Z | 2023-11-05T02:21:49.000Z | 2023-11-05T02:21:49 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 530086607
num_examples: 4681
- name: test
num_bytes: 60210047
num_examples: 521
download_size: 224023623
dataset_size: 590296654
---
# Dataset Card for "mlcb-lite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.666680097579956,
-0.3822020888328552,
0.16143058240413666,
0.25390538573265076,
-0.3143365681171417,
-0.05670790374279022,
0.18034224212169647,
-0.07468189299106598,
0.926204264163971,
0.5887891054153442,
-0.8653289079666138,
-0.6977936625480652,
-0.34194886684417725,
-0.333363205194473... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erikaxenia/id_card_class_da | erikaxenia | 2023-11-11T18:38:35Z | 30 | 0 | null | [
"region:us"
] | 2023-11-11T18:38:35Z | 2023-11-11T18:38:17.000Z | 2023-11-11T18:38:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 250255462.45
num_examples: 2350
- name: valid
num_bytes: 21179110.0
num_examples: 59
- name: test
num_bytes: 16112586.0
num_examples: 58
download_size: 248519862
dataset_size: 287547158.45
---
# Dataset Card for "id_card_class_da"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6397864818572998,
-0.43422266840934753,
0.2048068791627884,
0.0070976922288537025,
-0.12989497184753418,
0.14816051721572876,
0.47222238779067993,
-0.08186466246843338,
0.8704288005828857,
0.2601599097251892,
-0.6491749882698059,
-0.8460555076599121,
-0.5500683784484863,
-0.227372869849... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/year | jxie | 2023-11-14T01:36:57Z | 30 | 0 | null | [
"region:us"
] | 2023-11-14T01:36:57Z | 2023-11-14T01:36:27.000Z | 2023-11-14T01:36:27 | ---
dataset_info:
features:
- name: inputs
sequence: float64
- name: label
dtype: float64
splits:
- name: train
num_bytes: 271551504
num_examples: 370972
- name: val
num_bytes: 67887876
num_examples: 92743
- name: test
num_bytes: 37793160
num_examples: 51630
download_size: 385557206
dataset_size: 377232540
---
# Dataset Card for "year"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6434473395347595,
-0.20324589312076569,
0.20764026045799255,
0.22230835258960724,
-0.2695602774620056,
-0.23163604736328125,
0.442768931388855,
-0.3825928568840027,
0.9126972556114197,
0.4641422927379608,
-1.0090041160583496,
-0.6103195548057556,
-0.4399523138999939,
-0.1601547300815582... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vanesa1221/llama-two-unsaac | vanesa1221 | 2023-11-18T17:22:02Z | 30 | 0 | null | [
"region:us"
] | 2023-11-18T17:22:02Z | 2023-11-17T16:28:59.000Z | 2023-11-17T16:28:59 | ---
dataset_info:
features:
- name: data
struct:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1358
num_examples: 2
download_size: 5753
dataset_size: 1358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kopyl/833-icons-dataset-1024-blip-large | kopyl | 2023-11-17T19:35:17Z | 30 | 0 | null | [
"region:us"
] | 2023-11-17T19:35:17Z | 2023-11-17T19:34:14.000Z | 2023-11-17T19:34:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 21063249.0
num_examples: 833
download_size: 19766635
dataset_size: 21063249.0
---
# Dataset Card for "833-icons-dataset-1024-blip-large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6281841397285461,
-0.07778698951005936,
0.21699003875255585,
0.5046412944793701,
-0.2311244010925293,
0.10076868534088135,
0.10398711264133453,
-0.4420091509819031,
1.0859441757202148,
0.6228162050247192,
-0.6419630646705627,
-0.6129369139671326,
-0.624366044998169,
0.1938590407371521,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
umarbutler/open-australian-legal-qa | umarbutler | 2023-11-20T01:07:50Z | 30 | 1 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"size_categories:1K<n<10K",
"source_datasets:umarbutler/open-australian-legal-c... | 2023-11-20T01:07:50Z | 2023-11-18T10:35:19.000Z | 2023-11-18T10:35:19 | ---
language:
- en
license: other
license_name: open-australian-legal-corpus
license_link: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md
tags:
- law
- legal
- australia
- question-answering
- qa
- question-answer
- text-generation
- llm
- chatbot
- conversational-ai
- generative-ai
- natural-language-understanding
- fine-tuning
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language_details: en-AU, en-GB
pretty_name: Open Australian Legal QA
size_categories:
- 1K<n<10K
source_datasets:
- umarbutler/open-australian-legal-corpus
task_categories:
- question-answering
- text-generation
- text2text-generation
task_ids:
- closed-domain-qa
viewer: true
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: prompt
dtype: string
- name: source
struct:
- name: version_id
dtype: string
- name: type
dtype: string
- name: jurisdiction
dtype: string
- name: source
dtype: string
- name: citation
dtype: string
- name: url
dtype: string
- name: text
dtype: string
config_name: train
splits:
- name: train
num_bytes: 11707756
num_examples: 2124
download_size: 11986365
dataset_size: 11707756
---
<!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_qa.py --save_info --all_configs`. -->
# **Open Australian Legal QA ⚖️**
<a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-qa" alt="Release"><img src="https://img.shields.io/badge/release-v1.0.0-green"></a>
Open Australian Legal QA is the first open dataset of Australian legal questions and answers.
Comprised of 2,124 questions and answers synthesised by `gpt-4` from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), the largest open database of Australian law, the dataset is intended to facilitate the development of legal AI assistants in Australia.
To ensure its accessibility to as wide an audience as possible, the dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Usage 👩💻
The below code snippet illustrates how the dataset may be loaded with the [Hugging Face Datasets](https://huggingface.co/docs/datasets/index) Python library:
```python
from datasets import load_dataset
corpus = load_dataset('umarbutler/open_australian_legal_qa', split='train')
```
To speed up the loading of the dataset, you may wish to install [`orjson`](https://github.com/ijl/orjson).
## Structure 🗂️
The dataset is stored in [qa.jsonl](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/qa.jsonl), a json lines file where each line represents a question-answer pair consisting of four keys:
| Key | Description |
| --- | --- |
| question | The text of the question. |
| answer | The text of the answer to the question. |
| prompt | The text of the prompt used to generate the question-answer pair. |
| source | A dictionary representing the document from which the question-answer pair was synthesised, sharing the same keys as documents in the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), with the `text` field constituting the text of the chunk used to generate the pair. |
## Methodology 🧪
2,124 documents from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus) were randomly sampled, barring bills and documents consisting entirely of whitespace. These documents were then split into semantically meaningful chunks of up to 384 tokens (as determined by [`tiktoken`](https://github.com/openai/tiktoken)'s tokeniser for `gpt-4`) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library.
Chunks that consisted entirely of whitespace, that contained six or more consecutive periods ignoring whitespace (indicating a table of contents), or that were fewer than 96 tokens long were discarded. A single chunk was randomly selected from each document (for those documents with a chunk to select) and subsequently cleaned of consecutive newlines, consecutive whitespace and lines consisting entirely of whitespace.
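For illustration only, the chunk-and-filter step might be sketched as follows. This is not the script used to build the dataset; it assumes `semchunk.chunk(text, chunk_size, token_counter)` as the chunking entry point and mirrors the thresholds quoted above:
```python
import re

import semchunk
import tiktoken

# Token counter backed by tiktoken's gpt-4 tokeniser, as described above.
encoder = tiktoken.encoding_for_model('gpt-4')
count_tokens = lambda text: len(encoder.encode(text))

def usable_chunks(document: str) -> list[str]:
    """Split a document into <=384-token chunks and drop unusable ones."""
    chunks = semchunk.chunk(document, chunk_size=384, token_counter=count_tokens)
    kept = []
    for chunk in chunks:
        if not chunk.strip():
            continue  # entirely whitespace
        if re.search(r'\.(?:\s*\.){5,}', chunk):
            continue  # 6+ consecutive periods ignoring whitespace: likely a table of contents
        if count_tokens(chunk) < 96:
            continue  # too short to yield a useful question-answer pair
        kept.append(chunk)
    return kept
```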
These chunks were then embedded into the following prompt, with the names of jurisdictions and types being capitalised and stripped of hyphens:
```xml
# Snippet
The snippet from an Australian legal document from which you must synthesise a question and answer is provided below.
<document_metadata>
<document_title><!-- insert citation here --></document_title>
<document_jurisdiction><!-- insert jurisdiction here --></document_jurisdiction>
<document_type><!-- insert type here --></document_type>
</document_metadata>
<snippet>
<!-- insert text here -->
</snippet>
# Format
You must format your response as follows:
<format>
# Question
{A question related to the snippet, or a topic discussed therein.}
# Answer
{The answer to the question, extracted from the snippet.}
</format>
# Instructions
You must act as a question-and-answer synthesiser that takes a snippet from an Australian legal document and synthesises a question related to the snippet, or a topic discussed therein, and an answer to that question, extracted from the snippet.
Your question must be decontextualised and standalone from the snippet. If the question pertains to a particular jurisdiction or document, it must state that explicitly (eg, 'In Victoria, is it lawful for ...?', 'What did the Court decide in Mabo v Queensland (No 2) [1992] HCA 23?', etc...).
Your answer must also be decontextualised and standalone from the snippet. It must reference the document from which it came (eg, 'Under the Crimes Act 1958 (Vic), ...', 'In Mabo v Queensland (No 2) [1992] HCA 23, the Court decided ...', etc...), not the snippet itself. It must be capable of being understood on its own and without reference to the snippet or its source document.
When referring to a document (eg, the Crimes Act) or a part thereof (eg, Paragraph 1), or to a person (eg, the Minister), organisation (eg, the Department) or concept (eg, the rule of law), you must refer to it by its full name (eg, the Crimes Act 1958 (Vic) instead of the Crimes Act, Paragraph 1 of ABC v XYZ instead of Paragraph 1, the Commonwealth Minister for Finance instead of the Minister).
If it is not possible to synthesise a question and answer from the snippet, you must respond with `<!no_qa!>`. Otherwise, your response must conform to the provided format.
```
The resulting prompts were then sent to `gpt-4` with the following hyperparameters:
| Hyperparameter | Value |
| --- | --- |
| `temperature` | 0 |
| `top_p` | 1 |
| `frequency_penalty` | 0 |
| `presence_penalty` | 0 |
| `max_tokens` | 768 |
`gpt-4`'s responses were parsed with the regex pattern `#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)`, yielding the question-answer pairs. Any malformed responses were discarded.
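As a rough illustration (a sketch, not the original pipeline code), the quoted pattern can be applied like so:
```python
import re

# The pattern quoted above, verbatim.
PATTERN = re.compile(r'#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)')

def parse_response(response: str) -> tuple[str, str] | None:
    """Return a (question, answer) pair, or None for a malformed response."""
    match = PATTERN.search(response)
    if match is None:
        return None  # malformed responses were discarded
    question, answer = (group.strip() for group in match.groups())
    return question, answer
```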
## Changelog 🔄
All notable changes to the dataset are documented in its [Changelog 🔄](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/CHANGELOG.md).
This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Licence 📜
The dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Citation 🔖
If you've relied on the dataset for your work, please cite:
```latex
@misc{butler-2023-open-australian-legal-dataset,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal QA},
publisher = {Hugging Face},
version = {1.0.0},
doi = {10.57967/hf/1361},
url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-qa}
}
```
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks Matthew Altenberg, who gave him the idea of using `gpt-4` to synthesise questions and answers from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus).
The author also acknowledges the creators of the many Python libraries relied upon in the creation of the dataset.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs. | [
-0.3934803903102875,
-0.8957387804985046,
0.5341398119926453,
0.03806672990322113,
-0.32920005917549133,
-0.3896142840385437,
-0.22350579500198364,
-0.222326397895813,
0.3677787184715271,
0.7718373537063599,
-0.4932955801486969,
-0.5301896333694458,
-0.345818430185318,
0.49688687920570374,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PKU-Alignment/PKU-SafeRLHF-30K | PKU-Alignment | 2023-11-20T10:23:37Z | 30 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"safe",
"safety",
"ai-safety",
"llm",
"lm",
"human-feedback",
"rlhf",
"safe-rlhf",
"arxiv:2307.04657",
"region:us"
] | 2023-11-20T10:23:37Z | 2023-11-20T10:20:10.000Z | 2023-11-20T10:20:10 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
tags:
- safe
- safety
- ai-safety
- llm
- lm
- human-feedback
- rlhf
- safe-rlhf
size_categories:
- 10K<n<100K
---
# Dataset Card for PKU-SafeRLHF
<span style="color: red;">Warning: this dataset contains data that may be offensive or harmful. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect the views of PKU-Alignment Team or any of its members. </span>
## Dataset Summary
The preference dataset consists of more than 30k expert comparison entries. Each entry includes two responses to a question, along with safety meta-labels and preference annotations for both responses, taking their helpfulness and harmlessness into consideration.
The annotation pipeline for this dataset is depicted in the following image:

## Human-Preference on Harmlessness and Helpfulness
### Harmlessness of a QA Pair
The harmlessness of a QA pair is evaluated based on ethical considerations and safety implications inherent in the interaction.
A QA pair is deemed harmless when it is classified as risk-neutral across all 14 harm categories outlined in our paper.
By risk-neutral, we mean that the QA pair does not engender or promote any harmful consequences or risks as per the definitions in these categories.
Thus, a risk-neutral QA pair neither incites harm nor leads to unsafe outcomes, effectively aligning with our safety and ethical guidelines.
### Helpfulness of a Response
The helpfulness of a response pertains to how effectively it addresses a given prompt. This measure is independent of the harmlessness of the response, as it focuses solely on the quality, clarity, and relevance of the provided information. Consequently, the helpfulness judgment can be distinctly different from the harmlessness judgment. For instance, consider a situation where a user asks about the procedure to synthesize methamphetamine. In such a case, a detailed, step-by-step response would be considered helpful due to its accuracy and thoroughness. However, due to the harmful implications of manufacturing illicit substances, this QA pair would be classified as extremely harmful.
### Ranking of Responses
Once the helpfulness and harmlessness of responses are evaluated, they are ranked accordingly. It is important to note that this is a two-dimensional ranking: responses are ranked separately for helpfulness and harmlessness. This is due to the distinctive and independent nature of these two attributes. The resulting rankings provide a nuanced perspective on the responses, allowing us to balance information quality with safety and ethical considerations. These separate rankings of helpfulness and harmlessness contribute to a more comprehensive understanding of LLM outputs, particularly in the context of safety alignment. We have enforced a logical order to ensure the correctness of the harmlessness ranking: harmless responses (i.e. all 14 harm categories risk-neutral) are always ranked higher than harmful ones (i.e., at least 1 category risky).
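As a loose illustration of that logical-order constraint, the check below sketches what a consistency test over the preference data might look like. The field names (`is_response_0_safe`, `is_response_1_safe`, `safer_response_id`) are assumptions about the schema rather than something this card guarantees; adapt them to the actual columns:
```python
from datasets import load_dataset

dataset = load_dataset("PKU-Alignment/PKU-SafeRLHF-30K", split="train")

def harmlessness_rank_is_consistent(example: dict) -> bool:
    """A harmless response must never be ranked below a harmful one."""
    # Assumed field names; not guaranteed by this card.
    safe = (example["is_response_0_safe"], example["is_response_1_safe"])
    safer = example["safer_response_id"]
    if safe[0] != safe[1]:
        # Exactly one response is harmless, so it must be the safer one.
        return safe[safer]
    return True

assert all(harmlessness_rank_is_consistent(ex) for ex in dataset)
```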
## Usage
To load our dataset, use the `load_dataset()` function as follows:
```python
from datasets import load_dataset
dataset = load_dataset("PKU-Alignment/PKU-SafeRLHF-30K")
```
## Paper
You can find more information in our paper
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
## Contact
The original authors host this dataset on GitHub here: https://github.com/PKU-Alignment/beavertails.
| [
-0.13230764865875244,
-0.6083540916442871,
0.17888174951076508,
0.004348084796220064,
-0.25636276602745056,
-0.2693689465522766,
0.13767275214195251,
-0.7170901894569397,
0.19374904036521912,
0.32235196232795715,
-0.16538846492767334,
-0.636422872543335,
-0.5092487335205078,
0.002234710380... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rkdeva/Dermnet-Test-1 | rkdeva | 2023-11-20T18:32:38Z | 30 | 0 | null | [
"region:us"
] | 2023-11-20T18:32:38Z | 2023-11-20T18:30:25.000Z | 2023-11-20T18:30:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 376769298.178
num_examples: 3937
download_size: 370140973
dataset_size: 376769298.178
---
# Dataset Card for "Dermnet-Test-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7127817869186401,
-0.40642595291137695,
-0.05925039201974869,
0.10694648325443268,
-0.25308942794799805,
-0.18408513069152832,
0.4677068889141083,
0.030737895518541336,
0.9208075404167175,
0.4970873296260834,
-1.0470659732818604,
-0.860258936882019,
-0.5532301068305969,
-0.1994474232196... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pnadel/iliad_odyssey_aligned | pnadel | 2023-11-20T18:38:44Z | 30 | 0 | null | [
"region:us"
] | 2023-11-20T18:38:44Z | 2023-11-20T18:38:42.000Z | 2023-11-20T18:38:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: sentid
dtype: string
- name: cit
dtype: string
- name: Eng
dtype: string
- name: Gk
dtype: string
- name: Lems
dtype: string
splits:
- name: train
num_bytes: 5475867.200602134
num_examples: 12223
- name: test
num_bytes: 1369078.7993978662
num_examples: 3056
download_size: 3953094
dataset_size: 6844946.0
---
# Dataset Card for "iliad_odyssey_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6093882918357849,
-0.13814879953861237,
0.2401161789894104,
-0.0783526599407196,
-0.28875941038131714,
-0.01792391948401928,
0.3243548274040222,
-0.06111318990588188,
0.9523340463638306,
0.46338507533073425,
-0.7776569128036499,
-0.8896013498306274,
-0.5221570730209351,
-0.2489781081676... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aladaf/homo-silicus-unboxing | aladaf | 2023-11-20T19:44:05Z | 30 | 0 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-20T19:44:05Z | 2023-11-20T19:36:18.000Z | 2023-11-20T19:36:18 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
pretty_name: unboxing
size_categories:
- 10K<n<100K
--- | [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NepaliAI/Health-Nepali | NepaliAI | 2023-11-28T14:24:01Z | 30 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-28T14:24:01Z | 2023-11-21T11:56:11.000Z | 2023-11-21T11:56:11 | ---
license: apache-2.0
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Yeobin/ShareGPT | Yeobin | 2023-11-21T15:13:42Z | 30 | 0 | null | [
"region:us"
] | 2023-11-21T15:13:42Z | 2023-11-21T15:13:09.000Z | 2023-11-21T15:13:09 | ---
dataset_info:
features:
- name: Q
dtype: string
- name: A
dtype: string
splits:
- name: train
num_bytes: 1007857827
num_examples: 608556
download_size: 244005649
dataset_size: 1007857827
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arnabdhar/Kaggle_NER | arnabdhar | 2023-11-24T06:24:10Z | 30 | 0 | null | [
"region:us"
] | 2023-11-24T06:24:10Z | 2023-11-24T05:46:58.000Z | 2023-11-24T05:46:58 | ---
dataset_info:
features:
- name: text
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': B-GEO
'2': B-TIM
'3': B-ORG
'4': I-PER
'5': B-PER
'6': I-ORG
'7': B-GPE
'8': I-GEO
'9': I-TIM
'10': B-ART
'11': B-EVE
'12': I-ART
'13': I-EVE
'14': B-NAT
'15': I-GPE
'16': I-NAT
splits:
- name: train
num_bytes: 14377383.453387268
num_examples: 38367
- name: test
num_bytes: 3594439.5466127316
num_examples: 9592
download_size: 4029143
dataset_size: 17971823.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/open-instruct-sharegpt_stringified-jsonifize | jsonifize | 2023-11-24T14:08:17Z | 30 | 0 | null | [
"region:us"
] | 2023-11-24T14:08:17Z | 2023-11-24T14:06:37.000Z | 2023-11-24T14:06:37 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/SlimOrca_stringified-jsonifize | jsonifize | 2023-11-24T14:09:03Z | 30 | 0 | null | [
"region:us"
] | 2023-11-24T14:09:03Z | 2023-11-24T14:08:28.000Z | 2023-11-24T14:08:28 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/Synthia-v1.3_stringified-jsonifize | jsonifize | 2023-11-24T14:09:15Z | 30 | 0 | null | [
"region:us"
] | 2023-11-24T14:09:15Z | 2023-11-24T14:09:04.000Z | 2023-11-24T14:09:04 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/Tested-188k-Python-Alpaca_stringified-jsonifize | jsonifize | 2023-11-24T14:09:20Z | 30 | 0 | null | [
"region:us"
] | 2023-11-24T14:09:20Z | 2023-11-24T14:09:16.000Z | 2023-11-24T14:09:16 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mozilla-foundation/common_voice_6_0 | mozilla-foundation | 2023-07-29T16:00:06Z | 29 | 0 | common-voice | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-07-29T16:00:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fi:
- 1K<n<10K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hi:
- n<1K
hsb:
- 1K<n<10K
hu:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lg:
- 1K<n<10K
lt:
- 1K<n<10K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
th:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- 1K<n<10K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 6.0
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- hi
- hsb
- hu
- ia
- id
- it
- ja
- ka
- kab
- ky
- lg
- lt
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- th
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 6.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of unique MP3 files, each paired with a corresponding text file.
Many of the 9,261 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 7,327 validated hours in 60 languages, but more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, Finnish, French, Frisian, Georgian, German, Greek, Hakha Chin, Hindi, Hungarian, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Lithuanian, Luganda, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Upper Sorbian, Spanish, Swedish, Tamil, Tatar, Thai, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short snippet illustrating this access pattern follows the field list below.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
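The snippet below illustrates the preferred access order for the `audio` field noted above (a minimal sketch using the Estonian config, matching the instance shown earlier):
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_6_0", "et", split="train", use_auth_token=True)

sample = ds[0]["audio"]       # decodes and resamples a single clip (preferred)
# clips = ds["audio"][0]      # would decode every clip before indexing (avoid)
print(sample["sampling_rate"], len(sample["array"]))
```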
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_6_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| [
-0.5394322276115417,
-0.7267853021621704,
0.13297699391841888,
0.4442981779575348,
-0.25714555382728577,
0.03209017962217331,
-0.5724020600318909,
-0.22764791548252106,
0.44092145562171936,
0.5437489151954651,
-0.7765343189239502,
-0.9643453359603882,
-0.44543853402137756,
0.24420461058616... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ncduy/mt-en-vi | ncduy | 2022-10-22T15:08:45Z | 29 | 4 | null | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:own",
"source_datasets:open_subtitles",
"source_datasets:tatoeba",
"source_datasets:opus_tedtalks",
"source_datasets:qed_amara",
"source_datasets:opus_wikipedia",
... | 2022-10-22T15:08:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- vi
license:
- mit
multilinguality:
- translation
pretty_name: "Machine Translation Paired English-Vietnamese Sentences"
size_categories:
- 1M<n<10M
source_datasets:
- own
- open_subtitles
- tatoeba
- opus_tedtalks
- qed_amara
- opus_wikipedia
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for Machine Translation Paired English-Vietnamese Sentences
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages of the dataset are English (`en`) and Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'en': 'And what I think the world needs now is more connections.',
'vi': 'Và tôi nghĩ điều thế giới đang cần bây giờ là nhiều sự kết nối hơn.',
'source': 'TED2020 v1'
}
```
### Data Fields
- `en` (str): English sentence
- `vi` (str): Vietnamese sentence
- `source` (str): The source corpus of the sentence pair (e.g. `TED2020 v1`).
### Data Splits
The dataset is split in train, validation and test.
| | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples |2884451| 11316| 11225|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ncduy0303](https://github.com/ncduy0303) for adding this dataset. | [
-0.2942553162574768,
-0.7858757972717285,
0.245990589261055,
0.2832227051258087,
-0.42503368854522705,
-0.04619917273521423,
-0.35270288586616516,
-0.17741572856903076,
0.41560474038124084,
0.8373029828071594,
-0.6405255794525146,
-0.970199465751648,
-0.6317564249038696,
0.5613563060760498... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/casum | projecte-aina | 2023-09-13T12:49:03Z | 29 | 0 | null | [
"task_categories:summarization",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-nc-4.0",
"arxiv:2202.06871",
"region:us"
] | 2023-09-13T12:49:03Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- ca
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets: []
task_categories:
- summarization
task_ids: []
pretty_name: casum
---
# Dataset Card for CaSum
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Sequence to Sequence Resources for Catalan](https://arxiv.org/pdf/2202.06871.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
CaSum is a summarization dataset. It is extracted from a newswire corpus crawled from the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)). The corpus consists of 217,735 instances, each composed of a headline and a body.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for abstractive summarization. Success on this task is typically measured by achieving a high Rouge score. The [mbart-base-ca-casum](https://huggingface.co/projecte-aina/bart-base-ca-casum) model currently achieves a Rouge score of 41.39.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'summary': 'Mapfre preveu ingressar 31.000 milions d’euros al tancament de 2018',
'text': 'L’asseguradora llançarà la seva filial Verti al mercat dels EUA a partir de 2017 ACN Madrid.-Mapfre preveu assolir uns ingressos de 31.000 milions d'euros al tancament de 2018 i destinarà a retribuir els seus accionistes com a mínim el 50% dels beneficis del grup durant el període 2016-2018, amb una rendibilitat mitjana a l’entorn del 5%, segons ha anunciat la companyia asseguradora durant la celebració aquest divendres de la seva junta general d’accionistes. La firma asseguradora també ha avançat que llançarà la seva filial d’automoció i llar al mercat dels EUA a partir de 2017. Mapfre ha recordat durant la junta que va pagar més de 540 milions d'euros en impostos el 2015, amb una taxa impositiva efectiva del 30,4 per cent. La companyia també ha posat en marxa el Pla de Sostenibilitat 2016-2018 i el Pla de Transparència Activa, “que han de contribuir a afermar la visió de Mapfre com a asseguradora global de confiança”, segons ha informat en un comunicat.'
}
```
### Data Fields
- `summary` (str): Summary of the piece of news
- `text` (str): The text of the piece of news
### Data Splits
We split our dataset into train, dev and test splits
- train: 197,735 examples
- validation: 10,000 examples
- test: 10,000 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language. There exist few resources for summarization in Catalan.
### Source Data
#### Initial Data Collection and Normalization
We obtained the headline and corresponding body of each news piece on the Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)) website and applied the following cleaning pipeline: deduplicating the documents, removing the documents with empty attributes, and deleting some boilerplate sentences.
#### Who are the source language producers?
The news portal Catalan News Agency ([Agència Catalana de Notícies; ACN](https://www.acn.cat/)).
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of summarization models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from unreliable web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by MT4All CEF project and [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest preprint:
```bibtex
@misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[N/A] | [
-0.3905108869075775,
-0.30675816535949707,
0.05086180195212364,
0.5478679537773132,
-0.4981723427772522,
0.1648985594511032,
-0.05466095358133316,
-0.2807738184928894,
0.8250936269760132,
0.5419554710388184,
-0.3033694922924042,
-1.0093499422073364,
-0.6278446912765503,
0.25059691071510315... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stas/wmt16-en-ro-pre-processed | stas | 2021-02-16T03:58:06Z | 29 | 0 | null | [
"region:us"
] | 2021-02-16T03:58:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # WMT16 English-Romanian Translation Data w/ further preprocessing
The original instructions are [here](https://github.com/rsennrich/wmt16-scripts/tree/master/sample).
This pre-processed dataset was created by running:
```
git clone https://github.com/rsennrich/wmt16-scripts
cd wmt16-scripts
cd sample
./download_files.sh
./preprocess.sh
```
It was originally used by the `transformers` example script [`finetune_trainer.py`](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/examples/seq2seq/finetune_trainer.py).
The data itself resides at https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
If you would like to convert it to jsonlines, I've included a small script `convert-to-jsonlines.py` that will do it for you. But if you're using the `datasets` API, it will be done on the fly.
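For example, a minimal sketch of loading it through the `datasets` API; nothing beyond the repository id is assumed here, so the split and field layout are confirmed from the printout:

```python
from datasets import load_dataset

ds = load_dataset("stas/wmt16-en-ro-pre-processed")
print(ds)  # inspect the available splits and features

# peek at one record; the field layout (e.g. an en/ro sentence pair)
# should be confirmed from this printout rather than assumed
first_split = next(iter(ds.values()))
print(first_split[0])
```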
| [
-0.6582688093185425,
-0.6106510758399963,
0.3238263726234436,
0.27077096700668335,
-0.5737007856369019,
-0.10544995963573456,
-0.4713524281978607,
-0.2841781675815582,
0.35242047905921936,
0.6116398572921753,
-0.9972092509269714,
-0.6513434648513794,
-0.5360673069953918,
0.3167568445205688... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stevhliu/demo | stevhliu | 2022-10-24T18:02:42Z | 29 | 0 | null | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"r... | 2022-10-24T18:02:42Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it with:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.3943271040916443,
-0.5129885077476501,
0.054255615919828415,
0.2910892963409424,
-0.17377373576164246,
0.18770764768123627,
-0.42581048607826233,
-0.3550795614719391,
0.5684940814971924,
0.544796347618103,
-0.896458625793457,
-1.1392998695373535,
-0.6151477694511414,
0.05347051844000816... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/loyola | ruanchaves | 2022-10-20T19:13:04Z | 29 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-10-20T19:13:04Z | 2022-03-05T19:23:21.000Z | 2022-03-05T19:23:21 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: The Loyola University of Delaware Identifier Splitting Oracle
tags:
- word-segmentation
---
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
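A minimal sketch of loading such instances with the `datasets` library (this card does not list split names, so the sketch inspects whatever splits exist):

```python
from datasets import load_dataset

ds = load_dataset("ruanchaves/loyola")
print(ds)  # inspect the available splits

# each record carries the fields documented below
example = next(iter(ds.values()))[0]
print(example["identifier"], "->", example["segmentation"])
```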
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | [
-0.6217297315597534,
-0.6610732078552246,
0.3080144226551056,
0.3345341384410858,
-0.519650399684906,
0.2806468605995178,
0.06052607297897339,
-0.48185640573501587,
0.5454756617546082,
0.269501656293869,
-0.5399247407913208,
-0.6675209403038025,
-0.4945134222507477,
0.13530907034873962,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shpotes/bosch-small-traffic-lights-dataset | shpotes | 2022-03-10T20:00:45Z | 29 | 4 | null | [
"license:other",
"region:us"
] | 2022-03-10T20:00:45Z | 2022-03-08T14:48:14.000Z | 2022-03-08T14:48:14 | ---
license: other
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
umanlp/xscitldr | umanlp | 2022-07-04T13:49:25Z | 29 | 0 | null | [
"region:us"
] | 2022-07-04T13:49:25Z | 2022-03-17T14:30:16.000Z | 2022-03-17T14:30:16 | **X-SCITLDR**: Cross-Lingual Extreme Summarization of Scholarly Documents
# X-SCITLDR
The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.
# Languages
- German
- Italian
- Chinese
- Japanese
# Related
- [Paper](https://dl.acm.org/doi/abs/10.1145/3529372.3530938)
- [Code](https://github.com/sobamchan/xscitldr/)
- [Contact](mailto:sotaro.takeshita@uni-mannheim.de)
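A minimal sketch of loading the dataset with the `datasets` library; whether language-specific configurations exist, and what they are named, is an assumption to verify against the repository:

```python
from datasets import load_dataset

# the configuration name "de" is hypothetical; check the repository for
# the actual target-language configs (German, Italian, Chinese, Japanese)
ds = load_dataset("umanlp/xscitldr", "de")
print(ds)
```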
# Citation Information
```
@inproceedings{takeshita-etal-2022-xsci,
author = {Takeshita, Sotaro and Green, Tommaso and Friedrich, Niklas and Eckert, Kai and Ponzetto, Simone Paolo},
title = {X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents},
year = {2022},
isbn = {9781450393454},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3529372.3530938},
doi = {10.1145/3529372.3530938},
abstract = {The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage 'summarize and translate' approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.},
booktitle = {Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries},
articleno = {4},
numpages = {12},
keywords = {scholarly document processing, summarization, multilinguality},
location = {Cologne, Germany},
series = {JCDL '22}
}
``` | [
-0.13864631950855255,
-0.37231749296188354,
0.055898915976285934,
0.15900960564613342,
-0.38011524081230164,
0.2850624918937683,
-0.17786334455013275,
-0.6894975900650024,
0.570763349533081,
0.015199391171336174,
-0.4133633077144623,
-0.7612108588218689,
-0.5066304802894592,
0.433140516281... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
westphal-jan/mnli_matched | westphal-jan | 2022-04-16T12:02:51Z | 29 | 0 | null | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"source_datasets:multi_nli",
"region:us"
] | 2022-04-16T12:02:51Z | 2022-04-11T10:06:59.000Z | 2022-04-11T10:06:59 | ---
source_datasets:
- multi_nli
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
## Dataset Description
This dataset provides easier accessibility to the original [MNLI dataset](https://huggingface.co/datasets/multi_nli).
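A minimal sketch of loading its splits with the `datasets` library (the split names below are assumed from the description that follows):

```python
from datasets import load_dataset

ds = load_dataset("westphal-jan/mnli_matched")
# split names assumed from this card's description
for split in ("train", "validation", "test"):
    print(split, len(ds[split]))
```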
We randomly choose 10% of the original `validation_matched` split and use it as the validation split.
The remaining 90% are used for the test split.
The train split remains unchanged. | [
-0.5133389234542847,
-0.14374741911888123,
-0.03654272109270096,
0.380784273147583,
-0.17031916975975037,
-0.20825354754924774,
0.15229631960391998,
-0.18816858530044556,
0.26274701952934265,
0.5625986456871033,
-0.9210729598999023,
-0.08492821455001831,
-0.11172296106815338,
0.30577969551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
csebuetnlp/CrossSum | csebuetnlp | 2023-07-06T08:03:28Z | 29 | 7 | null | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:my",
... | 2023-07-06T08:03:28Z | 2022-04-20T08:27:10.000Z | 2022-04-20T08:27:10 | ---
task_categories:
- summarization
task_ids:
- news-articles-summarization
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
size_categories:
- 1M<n<10M
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- original
annotations_creators:
- found
language_creators:
- found
pretty_name: CrossSum
---
# Dataset Card for "CrossSum"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/CrossSum](https://github.com/csebuetnlp/CrossSum)
- **Paper:** [CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs](https://arxiv.org/abs/2112.08804)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
We present CrossSum, a large-scale dataset
comprising 1.70 million cross-lingual article summary samples in 1500+ language-pairs
constituting 45 languages. We use the multilingual XL-Sum dataset and align identical
articles written in different languages via crosslingual retrieval using a language-agnostic
representation model.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Languages
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
## Loading the dataset
```python
from datasets import load_dataset
# for available language names, see above
src_lang = "english"
tgt_lang = "bengali"
ds = load_dataset(f"csebuetnlp/CrossSum", "{}-{}".format(src_lang, tgt_lang))
```
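Continuing from the snippet above, the returned object can be inspected to confirm the splits and the field layout documented under Data Fields below:

```python
# continues from the loading snippet above
print(ds)  # splits available for the chosen language pair
example = next(iter(ds.values()))[0]
print(sorted(example.keys()))  # expected: source_url, summary, target_url, text
```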
## Dataset Structure
### Data Instances
One example from the `japanese-bengali` pair is given below in JSON format.
```
{
"source_url": "https://www.bbc.com/japanese/53074000",
"target_url": "https://www.bbc.com/bengali/news-53064712",
"summary": "বিজ্ঞানীরা বলছেন ডেক্সামেথাসোন নামে সস্তা ও সহজলভ্য একটি ওষুধ করোনাভাইরাসে গুরুতর অসুস্থ রোগীদের জীবন রক্ষা করতে সাহায্য করবে।",
"text": "ミシェル・ロバーツ、BBCニュースオンライン健康担当編集長 英オックスフォード大学の研究チームによると、低用量のデキサメタゾンは新型ウイルスとの戦いで画期的な突破口になる。 新型コロナウイルスに対し、様々な既存の治療法の効果を試す世界的規模の臨床試験の一貫として、デキサメタゾンが試された。 その結果、人工呼吸器を必要とする重症患者の致死率が3割下がり、酸素供給を必要とする患者の場合は2割下がった。 新型ウイルスのパンデミック(世界的流行)の初期からイギリスでデキサメタゾンを治療に使用していた場合、最大5000人の命が救えたはずだと研究者たちは言う。 さらに、新型コロナウイルスによる感染症「COVID-19」の患者が多く出ている貧しい国にとっても、安価なデキサメタゾンを使う治療は大いに役立つと期待される。 重症者の致死率が大幅に下がる イギリス政府は20万人分の投与量を備蓄しており、国民医療制度の国民保健サービス(NHS)で患者への使用を開始する方針を示した。 ボリス・ジョンソン英首相は「イギリス科学界の素晴らしい成果」を歓迎し、「たとえ感染の第2波が来ても備蓄が足りるよう、数を確保するための措置をとった」と述べた。 イングランド首席医務官クリス・ウィッティー教授は、「COVID-19にとってこれまでで一番重要な臨床試験結果だ。手に入りやすく安全でなじみのある薬によって、酸素供給や人工呼吸器が必要な人の致死率が大幅に下がった。(中略)この発見が世界中で人命を救う」と評価した。 <関連記事> 新型コロナウイルスに20人が感染した場合、19人は入院しないまま回復する。入院する人もほとんどは回復するものの、重症化して酸素供給や人工呼吸器を必要とする人もいる。 デキサメタゾンはこうした重症患者の治療に効果があるもよう。 新型ウイルスに感染した患者の体内では、ウイルスと戦う免疫系が暴走することがある。その免疫系の過剰反応による体の損傷を、デキサメタゾンが緩和するものとみられる。 「サイトカイン・ストーム」と呼ばれる免疫系の過剰反応が、患者の命を奪うこともある。 デキサメタゾンはすでに抗炎症剤として、ぜんそくや皮膚炎など様々な症状の治療に使われている。 初めて致死率を下げる薬 オックスフォード大学が主導する臨床試験は、約2000人の入院患者にデキサメタゾンを投与。それ以外の4000人以上の患者と容体を比較した。 人工呼吸器を使用する患者については、死亡リスクが40%から28%に下がった。 酸素供給する患者は、死亡リスクが25%から20%に下がった。 研究チームのピーター・ホービー教授は、「今のところ、致死率を実際に下げる結果が出たのは、この薬だけだ。しかも、致死率をかなり下げる。画期的な突破口だ」と話した。 研究を主導するマーティン・ランドレイ教授によると、人工呼吸器を使う患者の8人に1人、ならびに酸素供給治療を受ける患者の20-25人に1人が、デキサメタゾンで救えることが分かったという。 「これはきわめて明確なメリットだ」と教授は言う。 「最大10日間、デキサメタゾンを投与するという治療法で、費用は患者1人あたり1日約5ポンド(約670円)。つまり、35ポンド(約4700円)で人ひとりの命が救える」 「しかもこれは、世界中で手に入る薬だ」 状況が許す限り、新型コロナウイルスで入院中の患者にはただちに投与を開始すべきだと、ランドレイ教授は促した。 ただし、自宅で自己治療するために薬局に買いに行くべきではないと言う。 デキサメタゾンは、呼吸補助を必要としない軽症の患者には効果がないもよう。 3月に始動した新型コロナウイルス治療薬の無作為化臨床試験「リカバリー・トライアル」は、抗マラリア薬「ヒドロキシクロロキン」も調べたものの、心臓疾患や致死率の悪化につながるという懸念から、ヒドロキシクロロキンについては試験を中止した。 一方で、感染者の回復にかかる時間を短縮するとみられるレムデシビルは、すでにNHSの保険対象になり治療現場で使われている。 <解説> ファーガス・ウォルシュBBC健康担当編集委員 COVID-19の死者を減らすと初めて立証された薬は、高価な新しい薬ではなく、古くからずっと使われてきた、きわめて安いステロイド剤だった。 世界中の患者が直ちにその恩恵を受けることになるので、これは歓迎すべき発見だ。 この臨床試験の最新成果がこれほど急いで発表されたのは、そのためだ。とてつもない影響を世界中にもたらすので。 デキサメタゾンは1960年代初めから、関節リウマチやぜんそくなど、幅広い症状の治療に使われてきた。 これまでは、人工呼吸器を必要とするCOVID-19患者の半数が亡くなってきた。その致死率を3割減らすというのは、絶大な効果だ。 集中治療室では点滴で投与する。もう少し軽症な患者には、錠剤で与える。 これまでのところ、COVID-19患者に効果があると証明された薬は、エボラ治療薬のレムデシビルだけだった。 レムデシビルは症状の回復期間を15日から11日に短縮する。しかし、致死率を下げると言えるだけの証拠は出ていなかった。 デキサメタゾンと異なり、レムデシビルは数の少ない新薬で、薬価もまだ公表されていない。"
}
```
### Data Fields
- `source_url`: A string representing the source article URL.
- `target_url`: A string representing the target article URL.
- `summary`: A string containing the article summary.
- `text`: A string containing the article text.
### Data Splits
No. of total examples for each language pair are as follows:
Language (ISO 639-1-Code) | am | ar | az | bn | my | zh-CN | zh-TW | en | fr | gu | ha | hi | ig | id | ja | rn | ko | ky | mr | np | om | ps | fa | pcm | pt | pa | ru | gd | sr | sr | si | so | es | sw | ta | te | th | ti | tr | uk | ur | uz | vi | cy | yo
----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | -----
am | -- | 667 | 100 | 272 | 95 | 179 | 167 | 1456 | 358 | 173 | 221 | 377 | 26 | 494 | 264 | 423 | 244 | 92 | 221 | 301 | 21 | 192 | 431 | 209 | 307 | 189 | 347 | 0 | 357 | 365 | 62 | 309 | 351 | 378 | 390 | 329 | 124 | 131 | 435 | 345 | 409 | 41 | 285 | 1 | 67
ar | 667 | -- | 787 | 804 | 652 | 2968 | 2843 | 9653 | 989 | 475 | 747 | 3665 | 86 | 6084 | 1188 | 876 | 707 | 299 | 559 | 854 | 9 | 2161 | 4186 | 436 | 2539 | 547 | 5564 | 1 | 1109 | 1145 | 315 | 1049 | 3654 | 1186 | 1311 | 877 | 367 | 27 | 4147 | 3457 | 4935 | 388 | 2666 | 38 | 141
az | 100 | 787 | -- | 277 | 84 | 371 | 334 | 1317 | 208 | 192 | 126 | 748 | 28 | 1111 | 231 | 188 | 155 | 221 | 194 | 242 | 1 | 252 | 817 | 91 | 678 | 190 | 2238 | 4 | 289 | 283 | 124 | 367 | 704 | 539 | 515 | 245 | 140 | 2 | 1495 | 1383 | 966 | 199 | 725 | 30 | 42
bn | 272 | 804 | 277 | -- | 139 | 318 | 284 | 1549 | 317 | 559 | 231 | 1396 | 35 | 1076 | 342 | 298 | 352 | 154 | 586 | 668 | 2 | 300 | 790 | 135 | 764 | 580 | 838 | 0 | 562 | 564 | 151 | 412 | 701 | 471 | 919 | 793 | 245 | 6 | 860 | 688 | 1382 | 98 | 527 | 37 | 61
my | 95 | 652 | 84 | 139 | -- | 356 | 314 | 685 | 90 | 96 | 74 | 528 | 12 | 761 | 144 | 100 | 112 | 58 | 89 | 152 | 1 | 234 | 426 | 39 | 230 | 86 | 535 | 0 | 115 | 123 | 87 | 79 | 431 | 86 | 185 | 147 | 71 | 4 | 449 | 350 | 591 | 62 | 447 | 4 | 12
zh-CN | 179 | 2968 | 371 | 318 | 356 | -- | 47101 | 4975 | 348 | 201 | 159 | 1379 | 38 | 2851 | 1017 | 240 | 412 | 139 | 240 | 275 | 14 | 559 | 1111 | 149 | 1371 | 250 | 2572 | 2 | 504 | 530 | 166 | 323 | 2002 | 412 | 511 | 353 | 269 | 11 | 1511 | 1619 | 1651 | 176 | 1858 | 33 | 39
zh-TW | 167 | 2843 | 334 | 284 | 314 | 47101 | -- | 4884 | 331 | 174 | 150 | 1213 | 35 | 2588 | 953 | 209 | 382 | 131 | 213 | 252 | 16 | 501 | 967 | 141 | 1271 | 226 | 2286 | 1 | 453 | 494 | 150 | 302 | 1873 | 383 | 465 | 335 | 250 | 12 | 1294 | 1464 | 1444 | 158 | 1663 | 31 | 38
en | 1456 | 9653 | 1317 | 1549 | 685 | 4975 | 4884 | -- | 1889 | 978 | 913 | 4728 | 144 | 10040 | 3040 | 1878 | 1673 | 490 | 1181 | 1614 | 38 | 1522 | 4680 | 1074 | 4744 | 1330 | 9080 | 128 | 3760 | 3809 | 532 | 2141 | 6910 | 2701 | 3156 | 2121 | 1020 | 58 | 5676 | 6562 | 6320 | 450 | 4574 | 2655 | 229
fr | 358 | 989 | 208 | 317 | 90 | 348 | 331 | 1889 | -- | 242 | 477 | 616 | 106 | 1018 | 274 | 735 | 264 | 124 | 241 | 323 | 4 | 196 | 602 | 439 | 921 | 247 | 849 | 2 | 555 | 569 | 98 | 502 | 990 | 872 | 425 | 380 | 185 | 10 | 829 | 721 | 766 | 76 | 438 | 40 | 159
gu | 173 | 475 | 192 | 559 | 96 | 201 | 174 | 978 | 242 | -- | 147 | 5170 | 34 | 710 | 228 | 183 | 268 | 106 | 2091 | 561 | 1 | 246 | 522 | 101 | 529 | 2210 | 582 | 0 | 331 | 345 | 125 | 261 | 540 | 300 | 1762 | 2066 | 164 | 5 | 631 | 508 | 1619 | 80 | 450 | 21 | 54
ha | 221 | 747 | 126 | 231 | 74 | 159 | 150 | 913 | 477 | 147 | -- | 460 | 202 | 901 | 157 | 485 | 135 | 61 | 159 | 239 | 5 | 229 | 487 | 529 | 375 | 157 | 525 | 1 | 258 | 258 | 49 | 391 | 463 | 568 | 299 | 260 | 87 | 9 | 519 | 400 | 526 | 59 | 352 | 30 | 362
hi | 377 | 3665 | 748 | 1396 | 528 | 1379 | 1213 | 4728 | 616 | 5170 | 460 | -- | 65 | 5627 | 623 | 489 | 520 | 234 | 3831 | 1357 | 4 | 1519 | 5351 | 192 | 6563 | 4052 | 4622 | 1 | 809 | 807 | 449 | 747 | 2931 | 893 | 3711 | 3762 | 378 | 7 | 3694 | 3935 | 15666 | 352 | 3738 | 77 | 79
ig | 26 | 86 | 28 | 35 | 12 | 38 | 35 | 144 | 106 | 34 | 202 | 65 | -- | 113 | 24 | 107 | 32 | 16 | 51 | 36 | 3 | 11 | 49 | 255 | 61 | 39 | 79 | 0 | 51 | 51 | 13 | 77 | 91 | 151 | 52 | 54 | 18 | 5 | 91 | 83 | 61 | 15 | 65 | 6 | 296
id | 494 | 6084 | 1111 | 1076 | 761 | 2851 | 2588 | 10040 | 1018 | 710 | 901 | 5627 | 113 | -- | 1274 | 994 | 774 | 347 | 745 | 1104 | 8 | 1430 | 3892 | 367 | 4409 | 725 | 7588 | 7 | 1387 | 1379 | 470 | 1312 | 4547 | 1873 | 1886 | 1131 | 599 | 9 | 5663 | 4829 | 6476 | 432 | 4810 | 145 | 174
ja | 264 | 1188 | 231 | 342 | 144 | 1017 | 953 | 3040 | 274 | 228 | 157 | 623 | 24 | 1274 | -- | 372 | 654 | 140 | 302 | 424 | 2 | 266 | 1014 | 152 | 706 | 269 | 1517 | 2 | 550 | 571 | 109 | 387 | 950 | 425 | 641 | 425 | 305 | 5 | 1242 | 1013 | 797 | 49 | 908 | 25 | 33
rn | 423 | 876 | 188 | 298 | 100 | 240 | 209 | 1878 | 735 | 183 | 485 | 489 | 107 | 994 | 372 | -- | 283 | 106 | 242 | 369 | 18 | 228 | 684 | 398 | 526 | 206 | 711 | 0 | 443 | 450 | 77 | 584 | 607 | 1186 | 521 | 363 | 149 | 13 | 724 | 610 | 617 | 59 | 631 | 20 | 180
ko | 244 | 707 | 155 | 352 | 112 | 412 | 382 | 1673 | 264 | 268 | 135 | 520 | 32 | 774 | 654 | 283 | -- | 99 | 319 | 445 | 1 | 150 | 596 | 130 | 587 | 264 | 649 | 0 | 522 | 543 | 81 | 234 | 613 | 324 | 541 | 452 | 197 | 5 | 680 | 616 | 532 | 54 | 530 | 12 | 45
ky | 92 | 299 | 221 | 154 | 58 | 139 | 131 | 490 | 124 | 106 | 61 | 234 | 16 | 347 | 140 | 106 | 99 | -- | 107 | 167 | 4 | 102 | 252 | 59 | 251 | 118 | 1013 | 1 | 206 | 211 | 45 | 145 | 279 | 150 | 206 | 174 | 109 | 3 | 346 | 508 | 270 | 113 | 201 | 12 | 23
mr | 221 | 559 | 194 | 586 | 89 | 240 | 213 | 1181 | 241 | 2091 | 159 | 3831 | 51 | 745 | 302 | 242 | 319 | 107 | -- | 630 | 1 | 232 | 608 | 138 | 524 | 1797 | 675 | 0 | 419 | 436 | 129 | 270 | 603 | 332 | 1776 | 1886 | 196 | 11 | 706 | 596 | 1395 | 79 | 473 | 16 | 48
np | 301 | 854 | 242 | 668 | 152 | 275 | 252 | 1614 | 323 | 561 | 239 | 1357 | 36 | 1104 | 424 | 369 | 445 | 167 | 630 | -- | 1 | 303 | 916 | 134 | 706 | 545 | 849 | 2 | 553 | 538 | 164 | 420 | 687 | 513 | 994 | 741 | 217 | 7 | 930 | 741 | 1156 | 84 | 719 | 39 | 65
om | 21 | 9 | 1 | 2 | 1 | 14 | 16 | 38 | 4 | 1 | 5 | 4 | 3 | 8 | 2 | 18 | 1 | 4 | 1 | 1 | -- | 2 | 3 | 11 | 4 | 6 | 8 | 0 | 2 | 3 | 0 | 6 | 7 | 5 | 2 | 2 | 1 | 103 | 5 | 10 | 1 | 4 | 2 | 0 | 7
ps | 192 | 2161 | 252 | 300 | 234 | 559 | 501 | 1522 | 196 | 246 | 229 | 1519 | 11 | 1430 | 266 | 228 | 150 | 102 | 232 | 303 | 2 | -- | 2815 | 94 | 594 | 249 | 1246 | 0 | 235 | 242 | 156 | 304 | 766 | 314 | 441 | 314 | 92 | 8 | 1049 | 818 | 2833 | 156 | 657 | 7 | 32
fa | 431 | 4186 | 817 | 790 | 426 | 1111 | 967 | 4680 | 602 | 522 | 487 | 5351 | 49 | 3892 | 1014 | 684 | 596 | 252 | 608 | 916 | 3 | 2815 | -- | 186 | 5512 | 541 | 4328 | 0 | 1028 | 1023 | 276 | 812 | 2512 | 1002 | 1250 | 797 | 364 | 8 | 3695 | 3567 | 6752 | 313 | 3190 | 66 | 74
pcm | 209 | 436 | 91 | 135 | 39 | 149 | 141 | 1074 | 439 | 101 | 529 | 192 | 255 | 367 | 152 | 398 | 130 | 59 | 138 | 134 | 11 | 94 | 186 | -- | 227 | 112 | 322 | 0 | 234 | 246 | 28 | 219 | 314 | 436 | 232 | 162 | 85 | 28 | 287 | 280 | 232 | 18 | 170 | 9 | 462
pt | 307 | 2539 | 678 | 764 | 230 | 1371 | 1271 | 4744 | 921 | 529 | 375 | 6563 | 61 | 4409 | 706 | 526 | 587 | 251 | 524 | 706 | 4 | 594 | 5512 | 227 | -- | 579 | 4452 | 7 | 1371 | 1341 | 231 | 602 | 7112 | 983 | 1042 | 820 | 468 | 3 | 3483 | 4421 | 6759 | 186 | 3754 | 110 | 97
pa | 189 | 547 | 190 | 580 | 86 | 250 | 226 | 1330 | 247 | 2210 | 157 | 4052 | 39 | 725 | 269 | 206 | 264 | 118 | 1797 | 545 | 6 | 249 | 541 | 112 | 579 | -- | 629 | 0 | 410 | 404 | 128 | 283 | 585 | 357 | 1726 | 1892 | 200 | 10 | 643 | 570 | 1515 | 73 | 431 | 16 | 44
ru | 347 | 5564 | 2238 | 838 | 535 | 2572 | 2286 | 9080 | 849 | 582 | 525 | 4622 | 79 | 7588 | 1517 | 711 | 649 | 1013 | 675 | 849 | 8 | 1246 | 4328 | 322 | 4452 | 629 | -- | 5 | 1495 | 1460 | 373 | 1166 | 4864 | 1672 | 1628 | 892 | 595 | 7 | 6223 | 22241 | 5309 | 809 | 3963 | 134 | 125
gd | 0 | 1 | 4 | 0 | 0 | 2 | 1 | 128 | 2 | 0 | 1 | 1 | 0 | 7 | 2 | 0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 7 | 0 | 5 | -- | 2 | 3 | 2 | 1 | 3 | 1 | 0 | 0 | 1 | 0 | 6 | 5 | 2 | 1 | 3 | 36 | 2
sr | 357 | 1109 | 289 | 562 | 115 | 504 | 453 | 3760 | 555 | 331 | 258 | 809 | 51 | 1387 | 550 | 443 | 522 | 206 | 419 | 553 | 2 | 235 | 1028 | 234 | 1371 | 410 | 1495 | 2 | -- | 9041 | 127 | 377 | 1235 | 574 | 761 | 691 | 340 | 6 | 1247 | 1512 | 1021 | 109 | 685 | 42 | 69
sr | 365 | 1145 | 283 | 564 | 123 | 530 | 494 | 3809 | 569 | 345 | 258 | 807 | 51 | 1379 | 571 | 450 | 543 | 211 | 436 | 538 | 3 | 242 | 1023 | 246 | 1341 | 404 | 1460 | 3 | 9041 | -- | 137 | 382 | 1260 | 568 | 775 | 699 | 347 | 10 | 1229 | 1498 | 1009 | 112 | 639 | 45 | 79
si | 62 | 315 | 124 | 151 | 87 | 166 | 150 | 532 | 98 | 125 | 49 | 449 | 13 | 470 | 109 | 77 | 81 | 45 | 129 | 164 | 0 | 156 | 276 | 28 | 231 | 128 | 373 | 2 | 127 | 137 | -- | 137 | 260 | 189 | 348 | 173 | 69 | 7 | 301 | 306 | 510 | 38 | 216 | 5 | 15
so | 309 | 1049 | 367 | 412 | 79 | 323 | 302 | 2141 | 502 | 261 | 391 | 747 | 77 | 1312 | 387 | 584 | 234 | 145 | 270 | 420 | 6 | 304 | 812 | 219 | 602 | 283 | 1166 | 1 | 377 | 382 | 137 | -- | 689 | 1020 | 723 | 384 | 178 | 19 | 968 | 875 | 1000 | 75 | 724 | 20 | 116
es | 351 | 3654 | 704 | 701 | 431 | 2002 | 1873 | 6910 | 990 | 540 | 463 | 2931 | 91 | 4547 | 950 | 607 | 613 | 279 | 603 | 687 | 7 | 766 | 2512 | 314 | 7112 | 585 | 4864 | 3 | 1235 | 1260 | 260 | 689 | -- | 1047 | 1073 | 827 | 469 | 10 | 3645 | 3130 | 3060 | 290 | 2330 | 59 | 133
sw | 378 | 1186 | 539 | 471 | 86 | 412 | 383 | 2701 | 872 | 300 | 568 | 893 | 151 | 1873 | 425 | 1186 | 324 | 150 | 332 | 513 | 5 | 314 | 1002 | 436 | 983 | 357 | 1672 | 1 | 574 | 568 | 189 | 1020 | 1047 | -- | 929 | 492 | 261 | 10 | 1348 | 1309 | 1253 | 90 | 936 | 37 | 219
ta | 390 | 1311 | 515 | 919 | 185 | 511 | 465 | 3156 | 425 | 1762 | 299 | 3711 | 52 | 1886 | 641 | 521 | 541 | 206 | 1776 | 994 | 2 | 441 | 1250 | 232 | 1042 | 1726 | 1628 | 0 | 761 | 775 | 348 | 723 | 1073 | 929 | -- | 2278 | 400 | 14 | 1486 | 1423 | 2404 | 134 | 1092 | 32 | 68
te | 329 | 877 | 245 | 793 | 147 | 353 | 335 | 2121 | 380 | 2066 | 260 | 3762 | 54 | 1131 | 425 | 363 | 452 | 174 | 1886 | 741 | 2 | 314 | 797 | 162 | 820 | 1892 | 892 | 0 | 691 | 699 | 173 | 384 | 827 | 492 | 2278 | -- | 306 | 11 | 893 | 832 | 1748 | 107 | 644 | 21 | 61
th | 124 | 367 | 140 | 245 | 71 | 269 | 250 | 1020 | 185 | 164 | 87 | 378 | 18 | 599 | 305 | 149 | 197 | 109 | 196 | 217 | 1 | 92 | 364 | 85 | 468 | 200 | 595 | 1 | 340 | 347 | 69 | 178 | 469 | 261 | 400 | 306 | -- | 5 | 477 | 480 | 414 | 37 | 357 | 10 | 26
ti | 131 | 27 | 2 | 6 | 4 | 11 | 12 | 58 | 10 | 5 | 9 | 7 | 5 | 9 | 5 | 13 | 5 | 3 | 11 | 7 | 103 | 8 | 8 | 28 | 3 | 10 | 7 | 0 | 6 | 10 | 7 | 19 | 10 | 10 | 14 | 11 | 5 | -- | 8 | 8 | 4 | 2 | 5 | 0 | 6
tr | 435 | 4147 | 1495 | 860 | 449 | 1511 | 1294 | 5676 | 829 | 631 | 519 | 3694 | 91 | 5663 | 1242 | 724 | 680 | 346 | 706 | 930 | 5 | 1049 | 3695 | 287 | 3483 | 643 | 6223 | 6 | 1247 | 1229 | 301 | 968 | 3645 | 1348 | 1486 | 893 | 477 | 8 | -- | 4108 | 4340 | 370 | 2981 | 126 | 130
uk | 345 | 3457 | 1383 | 688 | 350 | 1619 | 1464 | 6562 | 721 | 508 | 400 | 3935 | 83 | 4829 | 1013 | 610 | 616 | 508 | 596 | 741 | 10 | 818 | 3567 | 280 | 4421 | 570 | 22241 | 5 | 1512 | 1498 | 306 | 875 | 3130 | 1309 | 1423 | 832 | 480 | 8 | 4108 | -- | 4290 | 442 | 3017 | 108 | 89
ur | 409 | 4935 | 966 | 1382 | 591 | 1651 | 1444 | 6320 | 766 | 1619 | 526 | 15666 | 61 | 6476 | 797 | 617 | 532 | 270 | 1395 | 1156 | 1 | 2833 | 6752 | 232 | 6759 | 1515 | 5309 | 2 | 1021 | 1009 | 510 | 1000 | 3060 | 1253 | 2404 | 1748 | 414 | 4 | 4340 | 4290 | -- | 389 | 3723 | 72 | 88
uz | 41 | 388 | 199 | 98 | 62 | 176 | 158 | 450 | 76 | 80 | 59 | 352 | 15 | 432 | 49 | 59 | 54 | 113 | 79 | 84 | 4 | 156 | 313 | 18 | 186 | 73 | 809 | 1 | 109 | 112 | 38 | 75 | 290 | 90 | 134 | 107 | 37 | 2 | 370 | 442 | 389 | -- | 257 | 10 | 15
vi | 285 | 2666 | 726 | 527 | 447 | 1858 | 1663 | 4575 | 438 | 450 | 352 | 3738 | 65 | 4810 | 908 | 631 | 530 | 201 | 473 | 719 | 2 | 657 | 3190 | 170 | 3755 | 431 | 3963 | 3 | 685 | 639 | 216 | 724 | 2330 | 936 | 1092 | 644 | 357 | 5 | 2982 | 3017 | 3723 | 257 | -- | 106 | 76
cy | 1 | 38 | 30 | 37 | 4 | 33 | 31 | 2655 | 40 | 21 | 30 | 77 | 6 | 145 | 25 | 20 | 12 | 12 | 16 | 39 | 0 | 7 | 66 | 9 | 110 | 16 | 134 | 36 | 42 | 45 | 5 | 20 | 59 | 37 | 32 | 21 | 10 | 0 | 126 | 108 | 72 | 10 | 106 | -- | 8
yo | 67 | 141 | 42 | 61 | 12 | 39 | 38 | 229 | 159 | 54 | 362 | 79 | 296 | 174 | 33 | 180 | 45 | 23 | 48 | 65 | 7 | 32 | 74 | 462 | 97 | 44 | 125 | 2 | 69 | 79 | 15 | 116 | 133 | 219 | 68 | 61 | 26 | 6 | 130 | 89 | 88 | 15 | 76 | 8 | --
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Source Data
[BBC News](https://www.bbc.co.uk/ws/languages)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2112.08804/)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/CrossSum)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/CrossSum)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. | [
-0.73736172914505,
-0.5127253532409668,
0.22844110429286957,
0.20777980983257294,
-0.3336719870567322,
0.12102769315242767,
-0.12648628652095795,
-0.4766067564487457,
0.728040874004364,
0.2773429751396179,
-0.6564172506332397,
-0.5037317872047424,
-0.6460627317428589,
0.24652495980262756,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/imagenet-sketch | nateraw | 2022-05-08T05:41:33Z | 29 | 0 | null | [
"license:mit",
"region:us"
] | 2022-05-08T05:41:33Z | 2022-05-08T05:32:17.000Z | 2022-05-08T05:32:17 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fhrozen/AudioSet2K22 | Fhrozen | 2023-05-07T23:50:56Z | 29 | 3 | null | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:100K<n<100M",
"source_datasets:unknown",
"license:cc-by-sa-4.0",
"audio-slot-filling",
"region:us"
] | 2023-05-07T23:50:56Z | 2022-05-09T12:42:09.000Z | 2022-05-09T12:42:09 | ---
annotations_creators:
- unknown
language_creators:
- unknown
license: cc-by-sa-4.0
size_categories:
- 100K<n<100M
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids: []
tags:
- audio-slot-filling
---
# Dataset Card for audioset2022
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [AudioSet Ontology](https://research.google.com/audioset/ontology/index.html)
- **Repository:** [Needs More Information]
- **Paper:** [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google.com/pubs/pub45857.html)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/audioset)
### Dataset Summary
The AudioSet ontology is a collection of sound events organized in a hierarchy. The ontology covers a wide range of everyday sounds, from human and animal sounds, to natural and environmental sounds, to musical and miscellaneous sounds.
**This repository only includes audio files for DCASE 2022 - Task 3**
The included labels are limited to:
- Female speech, woman speaking
- Male speech, man speaking
- Clapping
- Telephone
- Telephone bell ringing
- Ringtone
- Laughter
- Domestic sounds, home sounds
- Vacuum cleaner
- Kettle whistle
- Mechanical fan
- Walk, footsteps
- Door
- Cupboard open or close
- Music
- Background music
- Pop music
- Musical instrument
- Acoustic guitar
- Marimba, xylophone
- Cowbell
- Piano
- Electric piano
- Rattle (instrument)
- Water tap, faucet
- Bell
- Bicycle bell
- Chime
- Knock
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for Sound Event Detection/Localization.
**The recordings include only single-channel audio. For localization tasks, RIR information must be applied.**
### Languages
None
## Dataset Structure
### Data Instances
**WIP**
```
{
    'file': '/path/to/clip.mp3'  # illustrative placeholder path
}
```
### Data Fields
- file: A path to the downloaded audio file in .mp3 format.
### Data Splits
This dataset only includes audio files from the unbalanced train list.
The data comprises two splits: weak labels and strong labels.
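A minimal loading sketch; how the weak- and strong-label splits are exposed (configs vs. splits) is not stated here, so the sketch only prints what is available:

```python
from datasets import load_dataset

ds = load_dataset("Fhrozen/AudioSet2K22")  # config/split layout may differ
print(ds)  # inspect how the weak- and strong-label splits are exposed
```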
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially downloaded by Nelson Yalta (nelson.yalta@ieee.org).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@inproceedings{45857,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
| [
-0.6023609638214111,
-0.3009222447872162,
0.18834376335144043,
0.09849456697702408,
-0.033296555280685425,
-0.15606234967708588,
-0.4476597011089325,
-0.5509189963340759,
0.4067881405353546,
0.5669286251068115,
-1.125267505645752,
-1.0751633644104004,
-0.48501157760620117,
-0.0063904561102... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/persona-chat | AlekseyKorshuk | 2022-06-04T21:49:08Z | 29 | 10 | null | [
"region:us"
] | 2022-06-04T21:49:08Z | 2022-06-04T21:48:57.000Z | 2022-06-04T21:48:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sileod/wikimedqa | sileod | 2023-05-16T07:47:46Z | 29 | 6 | null | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | 2023-05-16T07:47:46Z | 2022-07-14T15:09:22.000Z | 2022-07-14T15:09:22 | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
language:
- en
tags:
- medical
---
```bib
@article{sileo2023wikimedqa,
title={Generating multiple-choice questions for medical question answering with distractors and cue-masking},
author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2303.07069},
year={2023}
}
``` | [
-0.3798050582408905,
-0.7763651609420776,
0.8078392744064331,
0.057904697954654694,
-0.2077837735414505,
-0.29043230414390564,
0.039304427802562714,
-0.5987853407859802,
0.7232807278633118,
0.610243558883667,
-0.7088602781295776,
-0.0876058042049408,
-0.7455105781555176,
0.4513864517211914... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imodels/compas-recidivism | imodels | 2022-08-13T04:17:29Z | 29 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"interpretability",
"fairness",
"region:us"
] | 2022-08-13T04:17:29Z | 2022-08-13T03:55:20.000Z | 2022-08-13T03:55:20 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: compas-recidivism
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- interpretability
- fairness
task_categories:
- tabular-classification
task_ids: []
---
Port of the COMPAS recidivism dataset from ProPublica (GitHub repo [here](https://github.com/propublica/compas-analysis)). See details there and use carefully, as there are serious known social impacts and biases present in this dataset.
Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).
The target is the binary outcome `is_recid`.
### Sample usage
Load the data:
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("imodels/compas-recidivism")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['is_recid'])
y = df['is_recid'].values
```
Fit a model:
```
import imodels
import numpy as np
m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```
Evaluate:
```
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['is_recid'])
y_test = df_test['is_recid'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
``` | [
-0.26446333527565,
-0.37977081537246704,
-0.024382179602980614,
0.2506695091724396,
-0.21192272007465363,
-0.018077077344059944,
0.022471141070127487,
-0.21626736223697662,
0.46518200635910034,
0.5117176175117493,
-0.512025773525238,
-0.4698038399219513,
-0.665065586566925,
0.3148584365844... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imodels/diabetes-readmission | imodels | 2022-08-14T15:38:59Z | 29 | 1 | null | [
"task_categories:tabular-classification",
"size_categories:100K<n<1M",
"interpretability",
"fairness",
"medicine",
"region:us"
] | 2022-08-14T15:38:59Z | 2022-08-14T15:19:27.000Z | 2022-08-14T15:19:27 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: diabetes-readmission
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- interpretability
- fairness
- medicine
task_categories:
- tabular-classification
task_ids: []
---
Port of the diabetes-readmission dataset from UCI (link [here](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008)). See details there and use carefully.
Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).
The target is the binary outcome `readmitted`.
### Sample usage
Load the data:
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("imodels/diabetes-readmission")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['readmitted'])
y = df['readmitted'].values
```
Fit a model:
```
import imodels
import numpy as np
m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```
Evaluate:
```
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['readmitted'])
y_test = df_test['readmitted'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
``` | [
-0.1673860251903534,
-0.218260258436203,
0.3603472113609314,
0.17823638021945953,
-0.2612933814525604,
-0.14432880282402039,
0.17455576360225677,
-0.26765960454940796,
0.4694598913192749,
0.6375763416290283,
-0.2594652771949768,
-0.6673555374145508,
-0.5577015280723572,
0.5580852031707764,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
npc-engine/light-batch-summarize-dialogue | npc-engine | 2022-08-20T18:18:10Z | 29 | 3 | null | [
"language:en",
"license:mit",
"region:us"
] | 2022-08-20T18:18:10Z | 2022-08-19T17:31:56.000Z | 2022-08-19T17:31:56 | ---
license: mit
language: en
---
# [Light dataset](https://parl.ai/projects/light/) prepared for zero-shot summarization.
Dialogues are preprocessed into the following form (a sketch of this preprocessing appears after the template):
```
<Character name>: <character line>
...
<Character name>: <character line>
Summarize the document
```
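A minimal sketch of producing that prompt string, assuming dialogue turns arrive as `(character, line)` pairs; the helper below is illustrative, not part of this dataset's tooling:

```python
def to_summarization_prompt(turns):
    """Render (character, line) pairs into the zero-shot template above."""
    lines = [f"{name}: {line}" for name, line in turns]
    lines.append("Summarize the document")
    return "\n".join(lines)

print(to_summarization_prompt([
    ("Knight", "Halt, who goes there?"),
    ("Peasant", "Just a humble traveler."),
]))
```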
| [
-0.22558540105819702,
-0.6560046672821045,
0.4654889404773712,
-0.17928816378116608,
-0.4859430491924286,
0.1912638396024704,
-0.053938232362270355,
0.136702299118042,
0.5639057159423828,
0.8832801580429077,
-0.8534291982650757,
-0.6471978425979614,
-0.061044782400131226,
0.227311313152313... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
affahrizain/jigsaw-toxic-comment | affahrizain | 2023-02-19T11:51:27Z | 29 | 1 | null | [
"region:us"
] | 2023-02-19T11:51:27Z | 2022-09-06T19:36:24.000Z | 2022-09-06T19:36:24 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: comment_clean
dtype: string
splits:
- name: train
num_bytes: 57080609
num_examples: 159100
- name: dev
num_bytes: 7809213
num_examples: 22393
- name: test
num_bytes: 22245686
num_examples: 63978
download_size: 13050863
dataset_size: 87135508
---
# Dataset Card for "jigsaw-toxic-comment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.34970223903656006,
-0.33615994453430176,
0.2647537589073181,
0.2156989872455597,
-0.48486563563346863,
0.0215429849922657,
0.40663573145866394,
-0.20558802783489227,
0.7919871807098389,
0.4463501572608948,
-0.7677645683288574,
-0.658272385597229,
-0.6728407144546509,
-0.2014496624469757... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
truongpdd/vietnamese_poetry | truongpdd | 2022-09-23T04:30:49Z | 29 | 2 | null | [
"region:us"
] | 2022-09-23T04:30:49Z | 2022-09-23T04:30:31.000Z | 2022-09-23T04:30:31 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluator/benchmark-dummy-data | autoevaluator | 2022-11-18T13:19:56Z | 29 | 0 | null | [
"region:us"
] | 2022-11-18T13:19:56Z | 2022-09-28T07:57:08.000Z | 2022-09-28T07:57:08 | # Dummy Dataset for AutoTrain Benchmark
This dataset contains dummy data that's needed to create AutoTrain projects for benchmarks like [RAFT](https://huggingface.co/spaces/ought/raft-leaderboard). See [here](https://github.com/huggingface/hf_benchmarks) for more details. | [
-0.6615493297576904,
-0.2253274768590927,
0.06285099685192108,
0.47743332386016846,
0.010141273960471153,
0.2590548098087311,
0.4209209382534027,
0.017758145928382874,
0.2351006418466568,
0.20714397728443146,
-1.0102750062942505,
-0.4293537735939026,
-0.2004079967737198,
-0.259091258049011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
abidlabs/celeb-dataset2 | abidlabs | 2022-10-02T20:00:42Z | 29 | 0 | null | [
"region:us"
] | 2022-10-02T20:00:42Z | 2022-10-02T20:00:38.000Z | 2022-10-02T20:00:38 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GuiGel/meddocan | GuiGel | 2022-10-07T08:58:07Z | 29 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"clinical",
"pr... | 2022-10-07T08:58:07Z | 2022-10-07T06:31:03.000Z | 2022-10-07T06:31:03 | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: MEDDOCAN
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- clinical
- protected health information
- health records
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for "meddocan"
## Table of Contents
- [Dataset Card for "meddocan"](#dataset-card-for-meddocan)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://temu.bsc.es/meddocan/index.php/datasets/](https://temu.bsc.es/meddocan/index.php/datasets/)
- **Repository:** [https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN)
- **Paper:** [http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf](http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A personal upload of the SPACCC_MEDDOCAN corpus. The tokenization is made with the help of a custom [spaCy](https://spacy.io/) pipeline.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|meddocan|10312|5268|5155|
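A minimal sketch of loading these splits with the `datasets` library (the token/tag field names are not stated on this card, so the sketch prints the features to confirm them):

```python
from datasets import load_dataset

ds = load_dataset("GuiGel/meddocan")
print(ds)  # expected sizes: train 10312 / validation 5268 / test 5155
print(ds["train"].features)  # confirm the token / NER-tag field names
```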
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
From the [SPACCC_MEDDOCAN: Spanish Clinical Case Corpus - Medical Document Anonymization](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) page:
> This work is licensed under a Creative Commons Attribution 4.0 International License.
>
> You are free to:
>
> - **Share** — copy and redistribute the material in any medium or format
> - **Adapt** — remix, transform, and build upon the material for any purpose, even commercially.
>
> Under the following terms:
>
> - **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
>
> For more information, please see https://creativecommons.org/licenses/by/4.0/
### Citation Information
```
@inproceedings{Marimon2019AutomaticDO,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results},
author={Montserrat Marimon and Aitor Gonzalez-Agirre and Ander Intxaurrondo and Heidy Rodriguez and Jose Lopez Martin and Marta Villegas and Martin Krallinger},
booktitle={IberLEF@SEPLN},
year={2019}
}
```
### Contributions
Thanks to [@GuiGel](https://github.com/GuiGel) for adding this dataset. | [
-0.5606368780136108,
-0.5098145008087158,
0.3272188603878021,
0.1974359005689621,
-0.3144146800041199,
0.1039455384016037,
-0.37221789360046387,
-0.42671674489974976,
0.7505372762680054,
0.5782443284988403,
-0.6928957104682922,
-1.0849846601486206,
-0.6589412093162537,
0.26454418897628784,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anisub/celeb-identities | anisub | 2022-10-09T03:03:29Z | 29 | 0 | null | [
"region:us"
] | 2022-10-09T03:03:29Z | 2022-10-09T03:03:16.000Z | 2022-10-09T03:03:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ThankGod/celeb-identities | ThankGod | 2023-04-25T12:00:42Z | 29 | 0 | null | [
"region:us"
] | 2023-04-25T12:00:42Z | 2022-10-09T18:37:35.000Z | 2022-10-09T18:37:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Andrew_Ng
'1': Elon_Musk
'2': Jay_Z
'3': Kanye
'4': Obama
'5': Queen
splits:
- name: train
num_bytes: 624532.0
num_examples: 16
download_size: 626669
dataset_size: 624532.0
---
# Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46353304386138916,
-0.25037872791290283,
0.00394661957398057,
0.09495989233255386,
-0.06635984778404236,
0.3351334035396576,
0.2677997946739197,
-0.30673035979270935,
0.9174419045448303,
0.39497241377830505,
-0.8485506176948547,
-0.641608715057373,
-0.6570073962211609,
-0.26624107360839... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jfeagans89/celeb-identities | Jfeagans89 | 2022-10-12T05:27:23Z | 29 | 0 | null | [
"region:us"
] | 2022-10-12T05:27:23Z | 2022-10-12T05:06:15.000Z | 2022-10-12T05:06:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gazoche/gundam-captioned | Gazoche | 2022-10-15T01:44:59Z | 29 | 4 | null | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<2K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-15T01:44:59Z | 2022-10-13T11:51:15.000Z | 2022-10-13T11:51:15 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Gundam captioned'
size_categories:
- n<2K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for captioned Gundam
Scraped from mahq.net (https://www.mahq.net/mecha/gundam/index.htm) and manually cleaned to only keep drawings and "Mobile Suits" (i.e, humanoid-looking machines).
The captions were automatically generated from a generic hardcoded description + the dominant colors as described by [BLIP](https://github.com/salesforce/BLIP). | [
-0.3952111303806305,
-0.1746838092803955,
0.02753169648349285,
0.2151976078748703,
-0.5259189009666443,
0.09573707729578018,
0.49379634857177734,
-0.1388643980026245,
0.41299089789390564,
0.7231647968292236,
-0.9657326936721802,
-0.5098564028739929,
-0.12663130462169647,
0.0715909972786903... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
csebuetnlp/BanglaParaphrase | csebuetnlp | 2022-11-14T15:39:43Z | 29 | 3 | null | [
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:original",
"language:bn",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"paraphrase-generation",
"arxiv:2210.0... | 2022-11-14T15:39:43Z | 2022-10-13T16:06:21.000Z | 2022-10-13T16:06:21 | ---
annotations_creators:
- found
language_creators:
- found
language:
- bn
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100k<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: BanglaParaphrase
tags:
- conditional-text-generation
- paraphrase-generation
---
# Dataset Card for "BanglaParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglaparaphrase](https://github.com/csebuetnlp/banglaparaphrase)
- **Paper:** [BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset](https://arxiv.org/abs/2210.05109)
- **Point of Contact:** [Najrin Sultana](mailto:nazrinshukti@gmail.com)
### Dataset Summary
We present BanglaParaphrase, a high-quality synthetic Bangla paraphrase dataset containing about 466k paraphrase pairs.
The paraphrases ensure high quality by being semantically coherent and syntactically diverse.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Languages
- `bengali`
## Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("csebuetnlp/BanglaParaphrase")
```
## Dataset Structure
### Data Instances
One example from the `train` part of the dataset is given below in JSON format.
```
{
"source": "বেশিরভাগ সময় প্রকৃতির দয়ার ওপরেই বেঁচে থাকতেন উপজাতিরা।",
"target": "বেশিরভাগ সময়ই উপজাতিরা প্রকৃতির দয়ার উপর নির্ভরশীল ছিল।"
}
```
### Data Fields
- 'source': A string representing the source sentence.
- 'target': A string representing the target sentence.
### Data Splits
Train, validation, and test example counts are given below:
Language | ISO 639-1 Code | Train | Validation | Test |
-------------- | ---------------- | ------- | ----- | ------ |
Bengali | bn | 419,967 | 23,331 | 23,332 |
## Dataset Creation
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Source Data
[Roar Bangla](https://roar.media/bangla)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Annotations
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Annotation process
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
#### Who are the annotators?
[Detailed in the paper](https://arxiv.org/abs/2210.05109)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglaparaphrase)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
```
@article{akil2022banglaparaphrase,
title={BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset},
author={Akil, Ajwad and Sultana, Najrin and Bhattacharjee, Abhik and Shahriyar, Rifat},
journal={arXiv preprint arXiv:2210.05109},
year={2022}
}
```
### Contributions
| [
-0.14592833817005157,
-0.8385167717933655,
-0.061750948429107666,
0.6591103076934814,
-0.3606366813182831,
0.04705388844013214,
-0.3370038568973541,
-0.19113364815711975,
0.31176239252090454,
0.4508616030216217,
-0.3354034125804901,
-0.7365273833274841,
-0.5457802414894104,
0.5169371366500... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jinhybr/WildReceipt | jinhybr | 2022-11-06T20:59:01Z | 29 | 0 | null | [
"region:us"
] | 2022-11-06T20:59:01Z | 2022-11-06T20:22:56.000Z | 2022-11-06T20:22:56 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pacovaldez/stackoverflow-questions | pacovaldez | 2022-11-10T00:14:37Z | 29 | 31 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technic... | 2022-11-10T00:14:37Z | 2022-11-09T01:16:19.000Z | 2022-11-09T01:16:19 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is the prioritization of the question. The classification scale usually consists of four values (P0, P1, P2, and P3), with different meanings across every participant in the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization of programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions along with a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
- `title`: string
- `body`: string
- `label`: int
### Data Splits
The split is 40/40/20, and the classes have been balanced to be around the same size.
## Dataset Creation
The data set was extracted and labeled with the following query in BigQuery:
```
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:

| label | count |
|------:|-----------:|
| 0 | 977,424 |
| 1 | 2,401,534 |
| 2 | 3,418,179 |
| 3 | 16,222,990 |
| **Grand Total** | **23,020,127** |
The data was sampled from each class to have around the same amount of records on every class.
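Since the classes were rebalanced, a quick sanity check of the label distribution after loading looks like the sketch below (the `train` split name is an assumption):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("pacovaldez/stackoverflow-questions")

# Each priority class (0-3) should appear in roughly equal proportion.
print(Counter(ds["train"]["label"]))
```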
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
| [
-0.9427905082702637,
-0.6233850717544556,
0.13444818556308746,
0.24416834115982056,
-0.2413521260023117,
0.06649365276098251,
-0.08735049515962601,
-0.17303334176540375,
0.3847637176513672,
0.607528030872345,
-0.4429824650287628,
-0.6410249471664429,
-0.6805639863014221,
-0.075287967920303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bioasq_2021_mesinesp | bigbio | 2022-12-22T15:43:30Z | 29 | 0 | null | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:43:30Z | 2022-11-13T22:06:28.000Z | 2022-11-13T22:06:28 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: MESINESP 2021
homepage: https://zenodo.org/record/5602914#.YhSXJ5PMKWt
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for MESINESP 2021
## Dataset Description
- **Homepage:** https://zenodo.org/record/5602914#.YhSXJ5PMKWt
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
The main aim of MESINESP2 is to promote the development of practically relevant semantic indexing tools for biomedical content in languages other than English. We have generated a manually annotated corpus, where domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts. All the documents were labeled with DeCS descriptors, a structured controlled vocabulary created by BIREME to index scientific publications on BvSalud, the largest database of scientific documents in Spanish, which hosts records from databases such as LILACS, MEDLINE, and IBECS.
The MESINESP track at BioASQ9 explores the efficiency of systems for assigning DeCS descriptors to different types of biomedical documents. To that end, we divided the task into three subtracks depending on the document type and, for each one, generated an annotated corpus that was provided to participating teams:
- [Subtrack 1 corpus] MESINESP-L – Scientific Literature: It contains all Spanish records from the LILACS and IBECS databases at the Virtual Health Library (VHL) with a non-empty abstract written in Spanish.
- [Subtrack 2 corpus] MESINESP-T – Clinical Trials: It contains records from Registro Español de Estudios Clínicos (REEC). REEC does not provide documents with the title/abstract structure needed in BioASQ; for that reason, we built artificial abstracts based on the content available in the data crawled using the REEC API.
- [Subtrack 3 corpus] MESINESP-P – Patents: This corpus includes patents in Spanish extracted from Google Patents which have the IPC codes “A61P” and “A61K31”.

In addition, we also provide a set of complementary data, such as the DeCS terminology file, a silver standard with the participants' predictions on the task background set, and the entities of medications, diseases, symptoms, and medical procedures extracted from the documents with the BSC NER tools.
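A minimal loading sketch, assuming the BigBio configuration naming convention (the exact config names for each subtrack are assumptions; check the dataset loader for the real ones):

```python
from datasets import load_dataset

# Config name is an assumption following the BigBio `<dataset>_source` convention.
ds = load_dataset("bigbio/bioasq_2021_mesinesp", name="bioasq_2021_mesinesp_source")
```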
## Citation Information
```
@conference {396,
title = {Overview of BioASQ 2021-MESINESP track. Evaluation of
advance hierarchical classification techniques for scientific
literature, patents and clinical trials.},
booktitle = {Proceedings of the 9th BioASQ Workshop
A challenge on large-scale biomedical semantic indexing
and question answering},
year = {2021},
url = {http://ceur-ws.org/Vol-2936/paper-11.pdf},
author = {Gasco, Luis and Nentidis, Anastasios and Krithara, Anastasia
and Estrada-Zavala, Darryl and Toshiyuki Murasaki, Renato and Primo-Pe{\~n}a,
Elena and Bojo-Canales, Cristina and Paliouras, Georgios and Krallinger, Martin}
}
```
| [
-0.03176531568169594,
-0.4273131787776947,
0.44853726029396057,
0.3337862193584442,
-0.4097188711166382,
0.2138657420873642,
0.27051442861557007,
-0.6791841387748718,
0.7330835461616516,
0.3302677571773529,
-0.6495944857597351,
-0.7965601682662964,
-0.6024904847145081,
0.6089022159576416,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cjvt/si_nli | cjvt | 2023-04-04T08:51:01Z | 29 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sl",
"... | 2023-04-04T08:51:01Z | 2022-11-15T08:41:29.000Z | 2022-11-15T08:41:29 | ---
annotations_creators:
- expert-generated
language:
- sl
language_creators:
- found
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Slovene natural language inference dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- natural-language-inference
dataset_info:
- config_name: default
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1352635
num_examples: 4392
- name: validation
num_bytes: 164561
num_examples: 547
- name: test
num_bytes: 246518
num_examples: 998
download_size: 410093
dataset_size: 1763714
- config_name: public
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1352591
num_examples: 4392
- name: validation
num_bytes: 164517
num_examples: 547
- name: test
num_bytes: 246474
num_examples: 998
download_size: 410093
dataset_size: 1763582
- config_name: private
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
- name: validation
- name: test
download_size: 0
dataset_size: 0
---
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998.
Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`.
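For the public copy, a minimal loading sketch (the `public` configuration matches the default one):

```python
from datasets import load_dataset

ds = load_dataset("cjvt/si_nli", "public")
example = ds["train"][0]
print(example["premise"], "->", example["label"])
```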
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'pair_id': 'P0',
'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
'annotation1': 'entailment',
'annotator1_id': 'annotator_C',
'annotation2': 'entailment',
'annotator2_id': 'annotator_A',
'annotation3': '',
'annotator3_id': '',
'annotation_final': 'entailment',
'label': 'entailment'
}
```
### Data Fields
- `pair_id`: string identifier of the pair (`""` in the test set),
- `premise`: premise sentence,
- `hypothesis`: hypothesis sentence,
- `annotation1`: the first annotation (`""` if not available),
- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
- `annotation2`: the second annotation (`""` if not available),
- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
- `annotation3`: the third annotation (`""` if not available),
- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
- `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or a unanimous agreement could not be reached),
- `label`: aggregated annotation: either same as `annotation_final` (in case of agreement), same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the most simple possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | [
-0.38644903898239136,
-0.6362401843070984,
0.26704585552215576,
0.43193483352661133,
-0.29589977860450745,
-0.38551732897758484,
-0.30623647570610046,
-0.44228702783584595,
0.4379609525203705,
0.6884685754776001,
-0.6593571305274963,
-0.7775662541389465,
-0.622161328792572,
0.3851906657218... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrbesher/tr-paraphrase-opensubtitles2018 | mrbesher | 2022-11-15T13:33:12Z | 29 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T13:33:12Z | 2022-11-15T13:18:54.000Z | 2022-11-15T13:18:54 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deutsche-telekom/ger-backtrans-paraphrase | deutsche-telekom | 2023-06-12T17:46:57Z | 29 | 7 | null | [
"task_categories:sentence-similarity",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:de",
"license:cc-by-sa-4.0",
"arxiv:1907.05791",
"arxiv:2004.09813",
"region:us"
] | 2023-06-12T17:46:57Z | 2022-11-21T19:24:43.000Z | 2022-11-21T19:24:43 | ---
license:
- cc-by-sa-4.0
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- sentence-similarity
---
# German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases.
These are text pairs that have the same meaning but are expressed with different words.
The sources of the paraphrases are different parallel German/English text corpora.
The English texts were machine translated back into German to obtain the paraphrases.
This dataset can be used for example to train semantic text embeddings.
To do this, for example, [SentenceTransformers](https://www.sbert.net/)
and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
can be used.
## Maintainers
[](https://www.welove.ai/)
This dataset is open sourced by [Philip May](https://may.la/)
and maintained by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
## Our pre-processing
Apart from the back translation, we have added more columns (for details see below). We have carried out the following pre-processing and filtering:
- We dropped text pairs where one text was longer than 499 characters.
- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
## Your post-processing
You probably don't want to use the dataset as it is, but filter it further.
This is what the additional columns of the dataset are for.
For us it has proven useful to delete the following pairs of sentences:
- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85
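A sketch of these filters with pandas, assuming the dataset has been loaded into a DataFrame `df` (e.g. from the csv file described under "Load this dataset" below):

```python
# Keep only the pairs that pass all of the thresholds above.
df = df[
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
]
```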
## Columns description
- **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
- **`en`**: the original English texts from the corpus
- **`de`**: the original German texts from the corpus
- **`en_de`**: the German texts translated back from English (from `en`)
- **`corpus`**: the name of the corpus
- **`min_char_len`**: the number of characters of the shortest text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
## Anomalies in the texts
It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts have weird dash prefixes. This looks like this:
```
- Hast du was draufgetan?
```
To remove them you could apply this function:
```python
import re
def clean_text(text):
    text = re.sub(r"^[-\s]*", "", text)
    text = re.sub(r"[-\s]*$", "", text)
return text
df["de"] = df["de"].apply(clean_text)
df["en_de"] = df["en_de"].apply(clean_text)
```
## Parallel text corpora used
| Corpus name & link | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | 18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php) | 1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php) | 313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php) | 289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php) | 285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **sum** | **21,292,789** |
## Back translation
We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
We used the `transformer.wmt19.en-de` model for this purpose:
```python
import torch

en2de = torch.hub.load(
"pytorch/fairseq",
"transformer.wmt19.en-de",
checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
tokenizer="moses",
bpe="fastbpe",
)
```
## How the Jaccard similarity was calculated
To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
we are using the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
to split the texts into tokens.
We then `lower()` the tokens so that upper and lower case letters no longer make a difference. Below you can find a code snippet with the details:
```python
from somajo import SoMaJo
LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)
def get_token_set(text, somajo_tokenizer):
sentences = somajo_tokenizer.tokenize_text([text])
tokens = [t.text.lower() for sentence in sentences for t in sentence]
token_set = set(tokens)
return token_set
def jaccard_similarity(text1, text2, somajo_tokenizer):
token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
intersection = token_set1.intersection(token_set2)
union = token_set1.union(token_set2)
jaccard_similarity = float(len(intersection)) / len(union)
return jaccard_similarity
```
## Load this dataset
### With Hugging Face Datasets
```python
# pip install datasets
from datasets import load_dataset
dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")
train_dataset = dataset["train"]
```
### With Pandas
If you want to download the csv file and then load it with Pandas you can do it like this:
```python
import pandas as pd

df = pd.read_csv("train.csv")
```
## Citations, Acknowledgements and Licenses
**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset
**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use
**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
## Citation
```latex
@misc{ger-backtrans-paraphrase,
title={Deutsche-Telekom/ger-backtrans-paraphrase - dataset at Hugging Face},
url={https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase},
year={2022},
author={May, Philip}
}
```
## Licensing
Copyright (c) 2022 [Philip May](https://may.la/),
[Deutsche Telekom AG](https://www.telekom.com/)
This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
| [
-0.47106456756591797,
-0.7295130491256714,
0.45109108090400696,
0.2546705901622772,
-0.40187424421310425,
-0.25284793972969055,
-0.42527514696121216,
-0.07272378355264664,
0.2874877452850342,
0.5422256588935852,
-0.5294333100318909,
-0.682269811630249,
-0.5265594720840454,
0.41966870427131... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PublicPrompts/Karsh | PublicPrompts | 2023-01-08T05:15:57Z | 29 | 19 | null | [
"license:openrail++",
"region:us"
] | 2023-01-08T05:15:57Z | 2022-11-28T19:18:32.000Z | 2022-11-28T19:18:32 | ---
license: openrail++
---
A Textual Inversion embedding for creating portraits in the style of Yousuf Karsh, one of the most famous portrait photographers ever.
The trigger word is "karsh".
Example images were generated with this prompt template: `portrait photo of <character>, highly detailed, by karsh`
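A minimal usage sketch with 🤗 Diffusers; the base model choice and the local path to the downloaded embedding file are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; any SD 1.x checkpoint compatible with the embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the downloaded embedding and bind it to the trigger word.
# "./karsh_embedding" is a placeholder for wherever you saved the file.
pipe.load_textual_inversion("./karsh_embedding", token="karsh")

image = pipe("portrait photo of a scientist, highly detailed, by karsh").images[0]
image.save("karsh_portrait.png")
```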










| [
-0.5298956036567688,
-0.38439446687698364,
0.5038474202156067,
-0.1605495810508728,
-0.23322489857673645,
0.24466246366500854,
0.25220611691474915,
-0.5510289072990417,
0.7411743998527527,
0.7269836664199829,
-0.9287956357002258,
-0.6750339269638062,
-0.5408270359039307,
0.3122870326042175... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rungalileo/MIT_Movies_w_inference | rungalileo | 2022-12-14T03:05:54Z | 29 | 0 | null | [
"region:us"
] | 2022-12-14T03:05:54Z | 2022-12-14T03:05:40.000Z | 2022-12-14T03:05:40 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
irds/trec-robust04 | irds | 2023-01-05T03:52:55Z | 29 | 1 | null | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T03:52:55Z | 2023-01-05T03:52:49.000Z | 2023-01-05T03:52:49 | ---
pretty_name: '`trec-robust04`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-robust04`
The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=528,155
- `queries` (i.e., topics); count=250
- `qrels`: (relevance assessments); count=311,410
This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-robust04', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
queries = load_dataset('irds/trec-robust04', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/trec-robust04', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Voorhees2004Robust,
title={Overview of the TREC 2004 Robust Retrieval Track},
author={Ellen Voorhees},
booktitle={TREC},
year={2004}
}
```
| [
-0.3224404752254486,
-0.35989853739738464,
0.15230947732925415,
0.04283442720770836,
-0.16410981118679047,
0.11559431999921799,
-0.0013717353576794267,
-0.1488020271062851,
0.30203521251678467,
0.3284369707107544,
-0.596045196056366,
-1.0236403942108154,
-0.3912578225135803,
0.352126210927... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chromeNLP/quality | chromeNLP | 2023-02-08T04:32:55Z | 29 | 1 | null | [
"license:mit",
"region:us"
] | 2023-02-08T04:32:55Z | 2023-01-26T07:34:36.000Z | 2023-01-26T07:34:36 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HiTZ/euscrawl | HiTZ | 2023-02-14T19:00:22Z | 29 | 2 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:eu"... | 2023-02-14T19:00:22Z | 2023-02-13T20:13:26.000Z | 2023-02-13T20:13:26 | ---
annotations_creators:
- no-annotation
language:
- eu
language_creators:
- found
license:
- cc
multilinguality:
- monolingual
pretty_name: EusCrawl
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- high-quality
- scraping
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: license
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 2314407002
num_examples: 1724544
download_size: 728281801
dataset_size: 2314407002
---
# Dataset Card for EusCrawl
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ixa.ehu.eus/euscrawl/
- **Repository:**
- **Paper:** https://arxiv.org/abs/2203.08111
- **Leaderboard:**
- **Point of Contact:** a.soroa@ehu.eus
### Dataset Summary
EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for
Basque comprising 12.5 million documents and 423 million tokens,
totalling 2.1 GiB of uncompressed text. EusCrawl was built using
ad-hoc scrapers to extract text from 33 Basque websites with
high-quality content, resulting in cleaner text compared to general
purpose approaches.
### Supported Tasks and Leaderboards
EusCrawl is intended for pretraining models for language modeling or masked language modeling.
### Languages
Basque (eu)
## Dataset Structure
### Data Instances
```json
{
"id": 6,
"title": "Herriko enpresa handien eta txikien arteko topaketak egingo dituzte",
"text": "09:30ean hasiko da bilera eta aurkezpena egingo dute Tubacex, JEZ, Envases, Guardian eta Vidrala enpresek. Eskualdeko lantegi motorrekin beste enpresa txikiak eta ertainak egongo dira. Erakunde publikoaren helburua da euren artean ezagutzea eta elkarlana sustatzea.",
"source": "aiaraldea",
"license": "cc-by-sa 3.0",
"url": "https://aiaraldea.eus/laudio/1494603159768-herriko-enpresa-handien-eta-txikien-arteko-topaketak-egingo-dituzte",
}
```
### Data Fields
- "id": example id
- "title": article title
- "text": article text
- "source": article source
- "license": article license
- "url": article url
### Data Splits
The dataset only has one training split because it is intended for pretraining language models.
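A minimal loading sketch using the fields described above:

```python
from datasets import load_dataset

ds = load_dataset("HiTZ/euscrawl", split="train")

# Each record carries its own license, so per-document filtering is possible.
example = ds[0]
print(example["title"], example["source"], example["license"])
```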
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We do not claim ownership of any document in the corpus. All documents
we collected were published under a Creative Commons license in their
original website, and the specific variant can be found in the
"license" field of each document. Should you consider
that our data contains material that is owned by you and you would not
like to be reproduced here, please contact Aitor Soroa at
a.soroa@ehu.eus.
### Citation Information
If you use our corpus or models for academic research, please cite the paper in question:
```bibtex
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
  author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
  Olatz Perez-de-Viñaspre and Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@juletx](https://github.com/juletx) for adding this dataset. | [
-0.503929615020752,
-0.4430992305278778,
0.08757758140563965,
0.29112479090690613,
-0.25088927149772644,
0.07446569204330444,
-0.3537893295288086,
-0.5377005338668823,
0.6408626437187195,
0.44042733311653137,
-0.673783540725708,
-0.7223350405693054,
-0.39951062202453613,
0.2868361175060272... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Krystalan/xmediasum | Krystalan | 2023-02-15T13:58:33Z | 29 | 1 | null | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-02-15T13:58:33Z | 2023-02-15T08:50:38.000Z | 2023-02-15T08:50:38 | ---
annotations_creators:
- expert-generated
language:
- en
- zh
- de
language_creators:
- crowdsourced
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: xmediasum
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- summarization
task_ids: []
---
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum was created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German.
- Paper: [ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization](https://aclanthology.org/2022.emnlp-main.526/) (EMNLP 2022)
- GitHub: https://github.com/krystalan/ClidSum
### Supported Tasks
- Cross-Lingual Summarization
- Cross-Lingual Dialogue Summarization
### Languages
- source language: English
- target language: Chinese and German
## Dataset Structure
### Data Instances
One example is given below in JSON format:
```json
{
"dialogue": "MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years.\r\nALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle!\r\nMICHELLE SINGLETARY: Hi!\r\nALEX COHEN, host: So why is the summer job market so hard for teens this year?\r\nMICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs.\r\nALEX COHEN, host: So you've got a teenager at home, a little bit young for the working world just yet, but what would you say to a teenager who's out there hunting around for a job?\r\nMICHELLE SINGLETARY: If you absolutely need a job, keep looking. You know, obviously the types of jobs that teens tend to go for in retail, fast food, you know, they still need people. And oftentimes you know, listen, you may not get the job at the beginning of the summer, but hold on because in late summer, when some of those college students are going back and perhaps some of those people who lost their jobs are finding permanent positions with more pay, you might be able to still get that job. So don't give up, you may spend a month or month and a half without it, but go back to those retailers and those restaurants and those fast food places to see if they still need someone.\r\nALEX COHEN, host: And now I know parents like having the break from providing allowance. But, you know, is - are there reasons maybe not to push your teen towards taking a job?\r\nMICHELLE SINGLETARY: I think it absolutely is. In fact I think too many teens are working and they don't need to work. They're some who absolutely need, they're contributing to their household or they're putting money into their own college fund. But more often than not, what parents do is say you've got to get a job, and then the teens get the job and they spend all the money on clothes and you know videos and iPods and paying their cell phone bills because they don't need a cell phone anyway.\r\nALEX COHEN, host: So it's not going towards the college tuition at all.\r\nMICHELLE SINGLETARY: It is not. It's just disposable income that they're disposing of. And parents are not setting any limits and you know and then the kids get used to the fact that they're using all of their paycheck. That's another bad habit. Because they don't have to pay bills and all, all their income goes through you know this stuff.\r\nMICHELLE SINGLETARY: And when it comes time to get a real job, they're surprised they don't have enough money. And so you know what? You can wait to work. Instead, maybe they can spend the summer volunteering at a charitable organization or you know going back to school and boosting up their math skills or their English skills. 
We push the teens out into the market too soon, I think for some families.\r\nALEX COHEN, host: But now let's say your kid is working. What tips can parents provide in terms of holding on to that summer money?\r\nMICHELLE SINGLETARY: You know, before they get their job, they need to sit down with them and do a budget. So before they actually work and get that first paycheck I mean, you know, have them draw up a budge where the money is going. And you ought to have some requirements for some of their money. That's right, be a parent.\r\nMICHELLE SINGLETARY: So make them put some of it towards their college fund, if in fact they're headed for college. You know what? Make them put some away, I call it the tax fund, even though they may not have to pay taxes, but to pay for long-term things that they may want. You know, books once they get to college, or maybe they want to get a car, and they can actually pay cash for it, with some of these funds. Don't let them just go out and spend it on movies and stuff. You ought to set some guidelines - this is where you should put the money. And look at their budget.\r\nALEX COHEN, host: Day to Day's personal finance contributor Michelle Singletary. Thank you, Michelle!\r\nMICHELLE SINGLETARY: You're welcome.\r\nALEX COHEN, host: Stay with us. NPR's Day to Day continues.",
"summary": "The tight job market could be bad news for teens seeking summer work. If your teen does find a job, will he or she know how to manage those paychecks? Our personal finance contributor talks with Alex Cohen about ways to help teens find a job.",
"summary_de": "Der angespannte Arbeitsmarkt könnte für Jugendliche, die Sommerarbeit suchen, eine schlechte Nachricht sein. Wenn Ihr Teenager einen Job findet, wird er oder sie wissen, wie er mit diesen Gehaltsschecks umgeht? Unser Mitarbeiter für persönliche Finanzen spricht mit Alex Cohen darüber, wie Teenager bei der Jobsuche unterstützt werden können.",
"summary_zh": "紧张的就业市场对寻找暑期工作的青少年来说可能是个坏消息。如果你的孩子找到了一份工作,他/她懂得怎么管理这些薪水吗?我们的个人理财撰稿人与亚历克斯·科恩谈论如何帮助青少年找到工作。"
},
```
### Data Fields
- `dialogue`: an English dialogue
- `summary`: the original English summary of the corresponding dialogue (provided by MediaSum)
- `summary_de`: the human-translated German summary
- `summary_zh`: the human-translated Chinese summary
### Data Splits
- training set: 20K samples
- validation set: 10K samples
- testing set: 10K samples
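A minimal loading sketch (the split names `train`/`validation`/`test` are assumptions based on the sizes above):

```python
from datasets import load_dataset

ds = load_dataset("Krystalan/xmediasum")

example = ds["train"][0]
# An English dialogue with its human-translated German and Chinese summaries.
print(example["summary_de"])
print(example["summary_zh"])
```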
## Dataset Creation
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Considerations for Using the Data
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/krystalan/ClidSum)
### Licensing Information
License: CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
}
```
### Contributions
Thanks to [@krystalan](https://github.com/krystalan) for adding this dataset. | [
-0.3408527970314026,
-0.4115658104419708,
0.22375893592834473,
0.2659901976585388,
-0.203678160905838,
0.01734968274831772,
-0.18411704897880554,
-0.30715179443359375,
0.3687773644924164,
0.41492247581481934,
-0.811976432800293,
-0.4430587589740753,
-0.2530873119831085,
-0.0799974575638771... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-source-metrics/issues-external | open-source-metrics | 2023-11-22T19:55:03Z | 29 | 0 | null | [
"region:us"
] | 2023-11-22T19:55:03Z | 2023-03-24T16:20:35.000Z | 2023-03-24T16:20:35 | ---
dataset_info:
features:
- name: dates
dtype: string
- name: type
struct:
- name: authorAssociation
dtype: string
- name: comment
dtype: bool
- name: issue
dtype: bool
splits:
- name: openai_python
num_bytes: 104501
num_examples: 2942
- name: stable_diffusion_webui
num_bytes: 1681962
num_examples: 48514
- name: langchain
num_bytes: 1539721
num_examples: 43232
- name: pytorch
num_bytes: 22349570
num_examples: 590614
- name: tensorflow
num_bytes: 14130923
num_examples: 396925
download_size: 10818493
dataset_size: 39806677
configs:
- config_name: default
data_files:
- split: stable_diffusion_webui
path: data/stable_diffusion_webui-*
- split: langchain
path: data/langchain-*
- split: pytorch
path: data/pytorch-*
- split: tensorflow
path: data/tensorflow-*
---
# Dataset Card for "issues-external"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.60774165391922,
-0.22296462953090668,
0.30911776423454285,
0.3629266023635864,
0.06458310782909393,
-0.08256270736455917,
0.07822435349225998,
-0.4497772753238678,
0.8133813738822937,
0.3117634654045105,
-1.0124337673187256,
-0.45879852771759033,
-0.4981949031352997,
-0.1770901829004287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
turuta/Multi30k-uk | turuta | 2023-05-04T19:11:45Z | 29 | 3 | null | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:uk",
"language:en",
"license:unknown",
"common",
"multi30k",
"ukrainian",
"region:us"
] | 2023-05-04T19:11:45Z | 2023-03-29T20:26:58.000Z | 2023-03-29T20:26:58 | ---
license: unknown
task_categories:
- translation
- text-generation
language:
- uk
- en
pretty_name: ukr-multi30k
size_categories:
- 10K<n<100K
tags:
- common
- multi30k
- ukrainian
---
## Dataset Multi30k: English-Ukrainian variation
The Multi30K dataset is designed to support multilingual multimodal research.
It originally extends the Flickr30K dataset with German translations: the image descriptions were collected on a crowdsourcing platform, while the translations were produced by professionally contracted translators.
We present a variation of this dataset manually translated into Ukrainian.
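As a hedged sketch (this card lists no configurations or splits, so none are assumed below), the data can be loaded with the `datasets` library:
```python
from datasets import load_dataset, get_dataset_config_names

# Discover available configurations first, since the card does not list them.
print(get_dataset_config_names("turuta/Multi30k-uk"))

dataset = load_dataset("turuta/Multi30k-uk")
print(dataset)  # shows the splits and column names actually provided
```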
Paper:
```
@inproceedings{saichyshyna-etal-2023-extension,
title = "Extension {M}ulti30{K}: Multimodal Dataset for Integrated Vision and Language Research in {U}krainian",
author = "Saichyshyna, Nataliia and
Maksymenko, Daniil and
Turuta, Oleksii and
Yerokhin, Andriy and
Babii, Andrii and
Turuta, Olena",
booktitle = "Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.unlp-1.7",
pages = "54--61",
abstract = "We share the results of the project within the well-known Multi30k dataset dedicated to improving machine translation of text from English into Ukrainian. The main task was to manually prepare the dataset and improve the translation of texts. The importance of collecting such datasets for low-resource languages for improving the quality of machine translation has been discussed. We also studied the features of translations of words and sentences with ambiguous meanings.The collection of multimodal datasets is essential for natural language processing tasks because it allows the development of more complex and comprehensive machine learning models that can understand and analyze different types of data. These models can learn from a variety of data types, including images, text, and audio, for more accurate and meaningful results.",
}
``` | [
-0.3907613158226013,
-0.09621895104646683,
0.2141181230545044,
0.1384642869234085,
-0.2989206612110138,
0.21238616108894348,
-0.3294563591480255,
-0.37714800238609314,
0.01665610633790493,
0.39367735385894775,
-0.7867377996444702,
-0.5368417501449585,
-0.38945621252059937,
0.40780538320541... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/madelon | mstz | 2023-04-16T17:34:04Z | 29 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"madelon",
"tabular_classification",
"UCI",
"region:us"
] | 2023-04-16T17:34:04Z | 2023-03-31T12:35:05.000Z | 2023-03-31T12:35:05 | ---
language:
- en
tags:
- madelon
- tabular_classification
- UCI
pretty_name: Madelon
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- Madelon
license: cc
---
# Madelon
The [Madelon dataset](https://archive-beta.ics.uci.edu/dataset/171/madelon) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
An artificial dataset with continuous input variables, posing a highly non-linear binary classification problem.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| madelon | Binary classification | |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/madelon")["train"]
``` | [
-0.46921470761299133,
-0.6225835680961609,
0.20748719573020935,
0.6147862672805786,
-0.06586451828479767,
-0.3740117847919464,
-0.3990408480167389,
-0.23419438302516937,
0.21185314655303955,
0.3223903179168701,
-0.4920845925807953,
-0.3819723129272461,
-0.700517475605011,
0.450967460870742... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/blood | mstz | 2023-04-15T11:37:04Z | 29 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"blood",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-15T11:37:04Z | 2023-04-05T20:51:24.000Z | 2023-04-05T20:51:24 | ---
language:
- en
tags:
- blood
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Blood Transfusion
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- blood
license: cc
---
# Blood
The [Blood Transfusion dataset](https://archive-beta.ics.uci.edu/dataset/176/blood+transfusion+service+center) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Blood donation records from the Blood Transfusion Service Center in Hsin-Chu City, Taiwan.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------------------------------------------|
| blood | Binary classification | Has the person donated blood in the past month? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/blood")["train"]
``` | [
-0.1546880602836609,
0.045017339289188385,
0.1171894520521164,
0.14597740769386292,
-0.4502101242542267,
0.08050020784139633,
0.26679667830467224,
-0.1988808661699295,
0.46430081129074097,
0.6744693517684937,
-0.476146936416626,
-0.5151739716529846,
-0.7175394892692566,
0.5297219157218933,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/pima | mstz | 2023-04-16T17:57:48Z | 29 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"pima",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T17:57:48Z | 2023-04-06T22:15:13.000Z | 2023-04-06T22:15:13 | ---
language:
- en
tags:
- pima
- tabular_classification
- binary_classification
- UCI
pretty_name: Pima
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- pima
license: cc
---
# Pima
The [Pima Indians Diabetes dataset](https://archive.ics.uci.edu/ml/datasets/pima+indians+diabetes) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict whether a patient has diabetes from diagnostic measurements.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| pima | Binary classification | Does the patient have diabetes?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/pima")["train"]
``` | [
-0.29626086354255676,
-0.5399395227432251,
0.5614867806434631,
0.08619079738855362,
-0.025319911539554596,
-0.4041554629802704,
-0.06785082072019577,
-0.09662257134914398,
0.24328231811523438,
0.7082295417785645,
-0.1805761158466339,
-0.7980391979217529,
-1.091127634048462,
0.4182702302932... | null | null | null | null | null | null | null | null | null | null | null | null | null |