id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
communityai/apt_pretrain_textbook_16k | 2023-10-31T08:33:56.000Z | [
"region:us"
] | communityai | null | null | 0 | 4 | 2023-10-31T08:30:05 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11835066870.0
num_examples: 116387
download_size: 5987982663
dataset_size: 11835066870.0
---
# Dataset Card for "apt_pretrain_textbook_16k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 384 | [
[
-0.044769287109375,
-0.00804901123046875,
0.01314544677734375,
0.0038433074951171875,
-0.036865234375,
-0.0138702392578125,
0.01064300537109375,
0.0003209114074707031,
0.036102294921875,
0.0196075439453125,
-0.045166015625,
-0.057464599609375,
-0.031646728515625... |
mcmanaman/autotrain-data-34qj-8hnp-las6 | 2023-10-31T12:25:24.000Z | [
"region:us"
] | mcmanaman | null | null | 0 | 4 | 2023-10-31T12:25:22 | ---
dataset_info:
features:
- name: Target
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 1029
num_examples: 30
- name: validation
num_bytes: 1029
num_examples: 30
download_size: 4432
dataset_size: 2058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-34qj-8hnp-las6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 603 | [
[
-0.034820556640625,
0.0230712890625,
0.00502777099609375,
0.0178680419921875,
-0.0013532638549804688,
0.006122589111328125,
0.0311279296875,
-0.007289886474609375,
0.03546142578125,
0.016082763671875,
-0.0657958984375,
-0.031494140625,
-0.0287933349609375,
-... |
Talelaw/fange | 2023-10-31T17:25:21.000Z | [
"region:us"
] | Talelaw | null | null | 0 | 4 | 2023-10-31T17:17:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
unaidedelf87777/super-instruct-v0.2 | 2023-11-01T02:21:47.000Z | [
"region:us"
] | unaidedelf87777 | null | null | 0 | 4 | 2023-10-31T19:06:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 911853252
num_examples: 606322
download_size: 0
dataset_size: 911853252
---
# Dataset Card for "super-instruct-v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 523 | [
[
-0.039886474609375,
-0.0091552734375,
0.005519866943359375,
-0.00189971923828125,
-0.0158538818359375,
-0.0089569091796875,
0.028564453125,
-0.0157623291015625,
0.05474853515625,
0.034912109375,
-0.069091796875,
-0.034423828125,
-0.035858154296875,
-0.029663... |
NTCong/en-translation | 2023-11-02T18:43:01.000Z | [
"region:us"
] | NTCong | null | null | 0 | 4 | 2023-11-01T03:48:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Skywork/ChineseDomainModelingEval | 2023-11-02T03:51:43.000Z | [
"license:other",
"arxiv:2310.19341",
"region:us"
] | Skywork | null | null | 1 | 4 | 2023-11-01T04:35:36 | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
# Introduction
Skywork/ChineseDomainModelingEval is an evaluation dataset for Chinese domain language modeling. For each of several domains, we selected a few hundred to over a thousand high-quality articles newly published between September and October 2023 and verified them manually, so the test data is both broad in coverage and high in quality. Because we evaluate the perplexity of different models on the most recent articles, it is very hard for a model to cheat. We will continue to evaluate models against the latest data and dynamically update the reported capabilities of each model.
# File Introduction
- zh_finance.jsonl: evaluation data for the finance domain
- zh_game.jsonl: evaluation data for the gaming domain
- zh_government.jsonl: evaluation data for the government domain
- zh_movie.jsonl: evaluation data for the movie domain
- zh_tech.jsonl: evaluation data for the technology domain
- zh_general.jsonl: evaluation data for the general domain
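Assuming each line of these `.jsonl` files is a standalone JSON object with a `text` field (the field name is an assumption here; check the actual files), a minimal sketch for loading one domain's evaluation set looks like:

```python
import json

def load_eval_set(path):
    """Read a JSON-lines evaluation file into a list of records."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# Example: texts = [r["text"] for r in load_eval_set("zh_finance.jsonl")]
```

The resulting texts can then be fed to any perplexity evaluation pipeline of your choice.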
# License Agreement
Community usage of the SkyPile dataset requires the Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions of the Skywork Community License as well as Apache 2.0.
# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,625 | [
[
-0.01557159423828125,
-0.037567138671875,
-0.00559234619140625,
0.03997802734375,
-0.0264739990234375,
-0.0190887451171875,
-0.0011434555053710938,
-0.0290374755859375,
-0.0013704299926757812,
0.041015625,
-0.031005859375,
-0.03143310546875,
-0.0462646484375,
... |
Shishir1807/llama_drugs | 2023-11-01T10:04:04.000Z | [
"region:us"
] | Shishir1807 | null | null | 0 | 4 | 2023-11-01T10:02:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aminlouhichi/donutTOPSOLIDTIMCOD2 | 2023-11-01T14:05:24.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 4 | 2023-11-01T14:05:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 9934958.0
num_examples: 46
- name: validation
num_bytes: 9934958.0
num_examples: 46
- name: test
num_bytes: 9934958.0
num_examples: 46
download_size: 27390966
dataset_size: 29804874.0
---
# Dataset Card for "donutTOPSOLIDTIMCOD2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 709 | [
[
-0.022918701171875,
-0.018646240234375,
0.01812744140625,
0.0115966796875,
-0.00868988037109375,
0.01097869873046875,
0.01477813720703125,
-0.0022182464599609375,
0.051727294921875,
0.0435791015625,
-0.05859375,
-0.036834716796875,
-0.055328369140625,
-0.031... |
VoidZeroe/llama7train | 2023-11-02T01:53:14.000Z | [
"region:us"
] | VoidZeroe | null | null | 0 | 4 | 2023-11-02T01:53:05 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 385849
num_examples: 1528
download_size: 101162
dataset_size: 385849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama7train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 436 | [
[
-0.030792236328125,
0.0099334716796875,
0.0195465087890625,
0.0258331298828125,
-0.031646728515625,
0.00977325439453125,
0.025360107421875,
-0.0200653076171875,
0.056976318359375,
0.0248870849609375,
-0.058624267578125,
-0.05224609375,
-0.047210693359375,
-0... |
diegomiranda/teste-dataset-3 | 2023-11-02T05:55:07.000Z | [
"license:apache-2.0",
"region:us"
] | diegomiranda | null | @article{bender2023learning,
title={Learning to Taste: A Multimodal Wine Dataset},
author={Bender, Thoranna and S{\o}rensen, Simon M{\o}e and Kashani, Alireza and Hjorleifsson, K Eldjarn and Hyldig, Grethe and Hauberg, S{\o}ren and Belongie, Serge and Warburg, Frederik},
journal={arXiv preprint arXiv:2308.16900},
year={2023}
} | 0 | 4 | 2023-11-02T05:34:52 | ---
license: apache-2.0
---
TESTE
| 35 | [
[
-0.022308349609375,
-0.00930023193359375,
0.02264404296875,
-0.0029506683349609375,
0.0004012584686279297,
0.010101318359375,
0.0029544830322265625,
-0.0195159912109375,
0.036285400390625,
0.03643798828125,
-0.025360107421875,
-0.0248260498046875,
-0.04891967773... |
re2panda/click_bate_1000_train_test | 2023-11-02T10:17:59.000Z | [
"region:us"
] | re2panda | null | null | 0 | 4 | 2023-11-02T10:17:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
marcus2000/dataset4sentinement_HSE3 | 2023-11-02T11:32:05.000Z | [
"region:us"
] | marcus2000 | null | null | 0 | 4 | 2023-11-02T11:31:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 17304904.8
num_examples: 13662
- name: test
num_bytes: 1922767.2
num_examples: 1518
download_size: 9942465
dataset_size: 19227672.0
---
# Dataset Card for "dataset4sentinement_HSE3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 598 | [
[
-0.035491943359375,
0.0034770965576171875,
0.031768798828125,
0.0182037353515625,
0.000591278076171875,
-0.005680084228515625,
0.0333251953125,
-0.006011962890625,
0.047271728515625,
0.03759765625,
-0.0538330078125,
-0.058502197265625,
-0.032623291015625,
0.... |
spneshaei/czech_en | 2023-11-02T13:40:04.000Z | [
"region:us"
] | spneshaei | null | null | 0 | 4 | 2023-11-02T13:31:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
distil-whisper/librispeech_long | 2023-11-02T14:22:54.000Z | [
"region:us"
] | distil-whisper | null | null | 0 | 4 | 2023-11-02T14:22:51 | ---
dataset_info:
config_name: clean
features:
- name: audio
dtype: audio
splits:
- name: validation
num_bytes: 1998609.0
num_examples: 1
download_size: 1984721
dataset_size: 1998609.0
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
---
# Dataset Card for "librispeech_long"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 480 | [
[
-0.044830322265625,
-0.01885986328125,
0.0248260498046875,
0.0235595703125,
-0.0253143310546875,
-0.01053619384765625,
-0.000017464160919189453,
-0.0273895263671875,
0.07623291015625,
0.038360595703125,
-0.05657958984375,
-0.053619384765625,
-0.032073974609375,
... |
ambushburn/human-burn-three | 2023-11-02T14:46:29.000Z | [
"region:us"
] | ambushburn | null | null | 0 | 4 | 2023-11-02T14:44:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Harveenchadha/indic-voice | 2023-02-26T11:42:49.000Z | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"language:hi",
"language:mr",
"language:or",
"language:ta",
"language:te",
"language:gu",
"robust-speech-event",
"region:us"
] | Harveenchadha | null | null | 2 | 3 | 2022-03-02T23:29:22 |
---
pretty_name: Indic Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- hi
- mr
- or
- ta
- te
- gu
multilinguality:
- multilingual
task_categories:
- speech-processing
task_ids:
- automatic-speech-recognition
tags:
- robust-speech-event
---
# Statistics
| Language | Source | Type | Duration (in hrs) | Files | Sample rate |
|----------|------------------|-------|----------|-------|--|
| Gujarati | Interspeech 2021 | Train | 39.999 | 22807 | 16000 |
| Gujarati | Interspeech 2021 | Valid | 5 | 3075 | 16000 |
| Gujarati | Interspeech 2021 | Test | 5.25 | 3419 | 8000 |
| Hindi | Interspeech 2021 | Train | 95.05 | 99925 | 16000 |
| Hindi | Interspeech 2021 | Valid | 5.55 | 3843 | 16000 |
| Hindi | Interspeech 2021 | Test | 5.49 | 3897 | 8000 |
| Marathi | Interspeech 2021 | Train | 93.89 | 79432 | 16000 |
| Marathi | Interspeech 2021 | Valid | 5 | 4675 | 16000 |
| Marathi | Interspeech 2021 | Test | 0.667 | 636 | 8000 |
| Odia | Interspeech 2021 | Train | 94.5 | 59782 | 16000 |
| Odia | Interspeech 2021 | Valid | 5.49 | 3471 | 16000 |
| Odia | Interspeech 2021 | Test | 5.49 | 4420 | 8000 |
| Tamil | Interspeech 2021 | Train | 39.98 | 39119 | 16000 |
| Tamil | Interspeech 2021 | Valid | 5 | 3081 | 16000 |
| Tamil | Interspeech 2021 | Test | 4.41 | 2609 | 8000 |
| Telugu | Interspeech 2021 | Train | 39.99 | 44874 | 16000 |
| Telugu | Interspeech 2021 | Valid | 4.99 | 3033 | 16000 |
| Telugu | Interspeech 2021 | Test | 4.39 | 2549 | 8000 | | 1,890 | [
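As a quick sanity check, the per-language training durations from the statistics table above can be tallied in a few lines (the numbers are copied directly from the table):

```python
# Train-split hours per language, copied from the statistics table above.
train_hours = {
    "Gujarati": 39.999,
    "Hindi": 95.05,
    "Marathi": 93.89,
    "Odia": 94.5,
    "Tamil": 39.98,
    "Telugu": 39.99,
}

total = sum(train_hours.values())
print(f"Total training audio: {total:.3f} hours")  # ~403.409 hours
```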
[
-0.014739990234375,
-0.0212554931640625,
0.0096435546875,
0.06268310546875,
-0.044281005859375,
-0.0010232925415039062,
0.0086822509765625,
0.00910186767578125,
0.03753662109375,
0.00714874267578125,
-0.04071044921875,
-0.034881591796875,
-0.053131103515625,
... |
lvwerra/abc | 2022-02-21T10:29:18.000Z | [
"region:us"
] | lvwerra | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lysandre/image-to-text | 2022-04-28T14:19:26.000Z | [
"region:us"
] | lysandre | null | null | 1 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
midas/ldke3k_small | 2021-11-20T02:40:14.000Z | [
"region:us"
] | midas | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
midas/semeval2017_ke_tagged | 2021-11-14T23:49:05.000Z | [
"region:us"
] | midas | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ml6team/xsum_nl | 2022-10-22T14:47:41.000Z | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|xsum",
"language:nl",
"license:unknown",
"region:us"
] | ml6team | null | null | 2 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- nl
language_bcp47:
- nl-BE
license:
- unknown
multilinguality:
- monolingual
pretty_name: XSum NL
size_categories:
- unknown
source_datasets:
- extended|xsum
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for XSum NL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a machine translated dataset. It's the [XSum dataset](https://huggingface.co/datasets/xsum) translated with [this model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) from English to Dutch.
See the [Hugging Face page of the original dataset](https://huggingface.co/datasets/xsum) for more information on the format of this dataset.
Use with:
```python
from datasets import load_dataset
load_dataset("ml6team/xsum_nl")
```
### Languages
Dutch
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: BBC ID of the article.
- `document`: a string containing the body of the news article
- `summary`: a string containing a one sentence summary of the article.
### Data Splits
- `train`
- `test`
- `validation`
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 3,296 | [
[
-0.0207977294921875,
-0.0230865478515625,
0.006244659423828125,
0.0128173828125,
-0.0160064697265625,
0.003826141357421875,
-0.02166748046875,
-0.030914306640625,
0.05926513671875,
0.038604736328125,
-0.059722900390625,
-0.062408447265625,
-0.0528564453125,
... |
mozilla-foundation/common_voice_5_1 | 2023-07-29T16:00:04.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | 0 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
as:
- n<1K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
hsb:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 10K<n<100K
it:
- 100K<n<1M
ja:
- 1K<n<10K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lv:
- 1K<n<10K
mn:
- 10K<n<100K
mt:
- 10K<n<100K
nl:
- 10K<n<100K
or:
- 1K<n<10K
pa-IN:
- n<1K
pl:
- 100K<n<1M
pt:
- 10K<n<100K
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 1K<n<10K
ru:
- 10K<n<100K
rw:
- 100K<n<1M
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 10K<n<100K
ta:
- 10K<n<100K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
uk:
- 10K<n<100K
vi:
- n<1K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- 10K<n<100K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 5.1
language_bcp47:
- ab
- ar
- as
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy-NL
- ga-IE
- hsb
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- uk
- vi
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 5.1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 7226 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 5671 validated hours in 54 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Assamese, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Slovenian, Upper Sorbian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukrainian, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_5_1", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| 10,544 | [
[
-0.04083251953125,
-0.054107666015625,
0.00989532470703125,
0.034454345703125,
-0.0187835693359375,
0.0027256011962890625,
-0.042816162109375,
-0.017608642578125,
0.031036376953125,
0.040985107421875,
-0.057037353515625,
-0.07177734375,
-0.032867431640625,
0... |
indonesian-nlp/mc4-id | 2022-10-25T11:52:34.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:id",
"license:odc-by",
"arxiv:1910.10683",
"region:us"
] | indonesian-nlp | A thoroughly cleaned version of the Indonesian portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | @article{JMLR:v21:20-074,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
} | 3 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- odc-by
multilinguality:
- monolingual
size_categories:
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4-id
---
# Dataset Card for Clean(maybe) Indonesia mC4
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Indonesian split of the multilingual colossal, cleaned version of Common Crawl's web crawl corpus (mC4). Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Splits
You can load any subset like this:
```python
from datasets import load_dataset
mc4_id_tiny = load_dataset("indonesian-nlp/mc4-id", "tiny")
```
Since splits are quite large, you may want to traverse them using the streaming mode available starting from 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_id_full_stream = load_dataset("indonesian-nlp/mc4-id", "full", split='train', streaming=True)
print(next(iter(mc4_id_full_stream)))  # Prints the first example
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting for studying data biases and how to limit their impact.
## Additional Information
### Dataset Curators
Authors at AllenAI are the original curators for the `mc4` corpus.
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
If you use this dataset in your work, please cite us and the original mC4 authors as:
```
@inproceedings{xue-etal-2021-mt5,
title = "m{T}5: A Massively Multilingual Pre-trained Text-to-Text Transformer",
author = "Xue, Linting and
Constant, Noah and
Roberts, Adam and
Kale, Mihir and
Al-Rfou, Rami and
Siddhant, Aditya and
Barua, Aditya and
Raffel, Colin",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.41",
doi = "10.18653/v1/2021.naacl-main.41",
pages = "483--498",
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| 3,579 | [
[
-0.042877197265625,
-0.039764404296875,
0.02392578125,
0.01041412353515625,
-0.0215911865234375,
0.00565338134765625,
-0.02099609375,
-0.037872314453125,
0.039581298828125,
0.037628173828125,
-0.045989990234375,
-0.043670654296875,
-0.03253173828125,
0.04959... |
mvarma/medwiki | 2022-10-25T09:51:06.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en-US",
"language:en",
"license:cc-by-4.0",
... | mvarma | MedWiki is a large-scale sentence dataset collected from Wikipedia with medical entity (UMLS) annotations. This dataset is intended for pretraining. | @inproceedings{medwiki,
title={Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text},
author={Maya Varma and Laurel Orr and Sen Wu and Megan Leszczynski and Xiao Ling and Christopher Ré},
year={2021},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021}
} | 3 | 3 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en-US
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: medwiki
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
---
# Dataset Card for MedWiki
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/HazyResearch/medical-ned-integration)
- **Paper:** [Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text](https://arxiv.org/abs/2110.08228)
- **Point of Contact:** [Maya Varma](mailto:mvarma2@stanford.edu)
### Dataset Summary
MedWiki is a large sentence dataset collected from a medically-relevant subset of Wikipedia and annotated with biomedical entities in the Unified Medical Language System (UMLS) knowledge base. For each entity, we include a rich set of types sourced from both UMLS and WikiData. Consisting of over 13 million sentences and 17 million entity annotations, MedWiki can be utilized as a pretraining resource for language models and can improve performance of medical named entity recognition and disambiguation systems, especially on rare entities.
Here, we include two configurations of MedWiki (further details in [Dataset Creation](#dataset-creation)):
- `MedWiki-Full` is a large sentence dataset with UMLS medical entity annotations generated through the following two steps: (1) a weak labeling procedure to annotate WikiData entities in sentences and (2) a data integration approach that maps WikiData entities to their counterparts in UMLS.
- `MedWiki-HQ` is a subset of MedWiki-Full with higher quality labels designed to limit noise that arises from the annotation procedure listed above.
### Languages
The text in the dataset is in English and was obtained from English Wikipedia.
## Dataset Structure
### Data Instances
A typical data point includes a sentence collected from Wikipedia annotated with UMLS medical entities and associated titles and types.
An example from the MedWiki test set looks as follows:
```
{'sent_idx_unq': 57000409,
'sentence': "The hair , teeth , and skeletal side effects of TDO are lifelong , and treatment is used to manage those effects .",
'mentions': ['tdo'],
'entities': ['C2931236'],
'entity_titles': ['Tricho-dento-osseous syndrome 1'],
'types': [['Disease or Syndrome', 'disease', 'rare disease', 'developmental defect during embryogenesis', 'malformation syndrome with odontal and/or periodontal component', 'primary bone dysplasia with increased bone density', 'syndromic hair shaft abnormality']],
'spans': [[10, 11]]}
```
### Data Fields
- `sent_idx_unq`: a unique integer identifier for the data instance
- `sentence`: a string sentence collected from English Wikipedia. Punctuation is separated from words, and the sentence can be tokenized into words with the `.split()` method.
- `mentions`: list of medical mentions in the sentence.
- `entities`: list of UMLS medical entity identifiers corresponding to mentions. There is exactly one entity for each mention, and the length of the `entities` list is equal to the length of the `mentions` list.
- `entity_titles`: List of English titles collected from UMLS that describe each entity. The length of the `entity_titles` list is equal to the length of the `entities` list.
- `types`: List of category types associated with each entity, including types collected from UMLS and WikiData.
- `spans`: List of integer pairs representing the word span of each mention in the sentence.
### Data Splits
MedWiki includes two configurations: MedWiki-Full and MedWiki-HQ (described further in [Dataset Creation](#dataset-creation)). For each configuration, data is split into training, development, and test sets. The split sizes are as follow:
| | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| MedWiki-Full Sentences |11,784,235 | 649,132 | 648,608 |
| MedWiki-Full Mentions |15,981,347 | 876,586 | 877,090 |
| MedWiki-Full Unique Entities | 230,871 | 55,002 | 54,772 |
| MedWiki-HQ Sentences | 2,962,089 | 165,941 | 164,193 |
| MedWiki-HQ Mentions | 3,366,108 | 188,957 | 186,622 |
| MedWiki-HQ Unique Entities | 118,572 | 19,725 | 19,437 |
## Dataset Creation
### Curation Rationale
Existing medical text datasets are generally limited in scope, often obtaining low coverage over the entities and structural resources in the UMLS medical knowledge base. When language models are trained across such datasets, the lack of adequate examples may prevent models from learning the complex reasoning patterns that are necessary for performing effective entity linking or disambiguation, especially for rare entities as shown in prior work by [Orr et al.](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf). Wikipedia, which is often utilized as a rich knowledge source in general text settings, contains references to medical terms and can help address this issue. Here, we curate the MedWiki dataset, which is a large-scale, weakly-labeled dataset that consists of sentences from Wikipedia annotated with medical entities in the UMLS knowledge base. MedWiki can serve as a pretraining dataset for language models and holds potential for improving performance on medical named entity recognition tasks, especially on rare entities.
### Source Data
#### Initial Data Collection and Normalization
MedWiki consists of sentences obtained from the November 2019 dump of English Wikipedia. We split pages into an 80/10/10 train/dev/test split and then segment each page at the sentence-level. This ensures that all sentences associated with a single Wikipedia page are placed in the same split.
#### Who are the source language producers?
The source language producers are editors on English Wikipedia.
### Annotations
#### Annotation process
We create two configurations of our dataset: MedWiki-Full and MedWiki-HQ. We label MedWiki-Full by first annotating all English Wikipedia articles with textual mentions and corresponding WikiData entities; we do so by obtaining gold entity labels from internal page links as well as generating weak labels based on pronouns and alternative entity names (see [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf) for additional information). Then, we use the off-the-shelf entity linker [Bootleg](https://github.com/HazyResearch/bootleg) to map entities in WikiData to their counterparts in the 2017AA release of the Unified Medical Language System (UMLS), a standard knowledge base for biomedical entities (additional implementation details in forthcoming publication). Any sentence containing at least one UMLS entity is included in MedWiki-Full. We also include types associated with each entity, which are collected from both WikiData and UMLS using the generated UMLS-Wikidata mapping. It is important to note that types obtained from WikiData are filtered according to methods described in [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf).
Since our labeling procedure introduces some noise into annotations, we also release the MedWiki-HQ dataset configuration with higher-quality labels. To generate MedWiki-HQ, we filtered the UMLS-Wikidata mappings to only include pairs of UMLS medical entities and WikiData items that share a high textual overlap between titles. MedWiki-HQ is a subset of MedWiki-Full.
To evaluate the quality of our UMLS-Wikidata mappings, we find that WikiData includes a small set of "true" labeled mappings between UMLS entities and WikiData items. (Note that we only include WikiData items associated with linked Wikipedia pages.) This set comprises approximately 9.3k UMLS entities in the original UMLS-Wikidata mapping (used for MedWiki-Full) and 5.6k entities in the filtered UMLS-Wikidata mapping (used for MedWiki-HQ). Using these labeled sets, we find that our mapping accuracy is 80.2% for the original UMLS-Wikidata mapping and 94.5% for the filtered UMLS-Wikidata mapping. We also evaluate integration performance on this segment as the proportion of mapped WikiData entities that share a WikiData type with the true entity, suggesting the predicted mapping adds relevant structural resources. Integration performance is 85.4% for the original UMLS-Wikidata mapping and 95.9% for the filtered UMLS-Wikidata mapping. The remainder of items in UMLS have no “true” mappings to WikiData.
#### Who are the annotators?
The dataset was labeled using weak-labeling techniques as described above.
### Personal and Sensitive Information
No personal or sensitive information is included in MedWiki.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable the creation of better named entity recognition systems for biomedical text. MedWiki encompasses a large set of entities in the UMLS knowledge base and includes a rich set of types associated with each entity, which can enable the creation of models that achieve high performance on named entity recognition tasks, especially on rare or unpopular entities. Such systems hold potential for improving automated parsing and information retrieval from large quantities of biomedical text.
### Discussion of Biases
The data included in MedWiki comes from English Wikipedia. Generally, Wikipedia articles are neutral in point of view and aim to avoid bias. However, some [prior work](https://www.hbs.edu/ris/Publication%20Files/15-023_e044cf50-f621-4759-a827-e9a3bf8920c0.pdf) has shown that ideological biases may exist within some Wikipedia articles, especially those that are focused on political issues or those that are written by fewer authors. We anticipate that such biases are rare for medical articles, which are typically comprised of scientific facts. However, it is important to note that bias encoded in Wikipedia is likely to be reflected by MedWiki.
### Other Known Limitations
Since MedWiki was annotated using weak labeling techniques, there is likely some noise in entity annotations. (Note that to address this, we include the MedWiki-HQ configuration, which is a subset of MedWiki-Full with higher quality labels. Additional details in [Dataset Creation](#dataset-creation)).
## Additional Information
### Dataset Curators
MedWiki was curated by Maya Varma, Laurel Orr, Sen Wu, Megan Leszczynski, Xiao Ling, and Chris Ré.
### Licensing Information
Dataset licensed under CC BY 4.0.
### Citation Information
```
@inproceedings{varma-etal-2021-cross-domain,
title = "Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text",
author = "Varma, Maya and
Orr, Laurel and
Wu, Sen and
Leszczynski, Megan and
Ling, Xiao and
R{\'e}, Christopher",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.388",
pages = "4566--4575",
}
```
### Contributions
Thanks to [@maya124](https://github.com/maya124) for adding this dataset.
| 12,309 | [
[
-0.0396728515625,
-0.052001953125,
0.0296478271484375,
-0.00919342041015625,
-0.02777099609375,
-0.01145172119140625,
-0.0284881591796875,
-0.036376953125,
0.03485107421875,
0.03997802734375,
-0.040008544921875,
-0.055572509765625,
-0.0277862548828125,
0.039... |
nateraw/cats_vs_dogs | 2022-10-20T18:41:56.000Z | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | nateraw | null | @Inproceedings (Conference){asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization,
author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared},
title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization},
booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
year = {2007},
month = {October},
publisher = {Association for Computing Machinery, Inc.},
url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/},
edition = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
} | 0 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Cats and Dogs
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids:
- other-other-image-classification
---
# Dataset Card for Cats Vs. Dogs
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Cats vs Dogs Dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765)
- **Repository:** N/A
- **Paper:**[Paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
A large set of images of cats and dogs. There are 1738 corrupted images that are dropped.
### Supported Tasks and Leaderboards
- image-classification
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': '/root/.cache/huggingface/datasets/downloads/extracted/6e1e8c9052e9f3f7ecbcb4b90860668f81c1d36d86cc9606d49066f8da8bfb4f/PetImages/Cat/1.jpg',
'label': 0
}
```
### Data Fields
The data instances have the following fields:
- `image`: a `string` filepath to an image.
- `label`: an `int` classification label.
### Data Splits
| name |train|
|----------|----:|
|cats_and_dogs|23410|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization,
author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared},
title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization},
booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
year = {2007},
month = {October},
publisher = {Association for Computing Machinery, Inc.},
url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/},
edition = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
| 4,028 | [
[
-0.0401611328125,
-0.03179931640625,
-0.00678253173828125,
0.01444244384765625,
-0.0260009765625,
0.0154876708984375,
-0.010101318359375,
-0.043365478515625,
0.038238525390625,
0.038665771484375,
-0.0474853515625,
-0.057037353515625,
-0.041595458984375,
0.02... |
nateraw/dummy-csv-dataset | 2021-10-20T04:32:17.000Z | [
"region:us"
] | nateraw | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
piEsposito/br-quad-2.0 | 2021-02-05T16:05:51.000Z | [
"region:us"
] | piEsposito | Translates SQuAD 2.0 from english to portuguese using Google Cloud API | @article{2020braquad,
author = {{Esposito}, Wladimir and {Esposito}, Piero and {Tamais},
Ana Laura and {Gatti}, Daniel},
title = "{BrQuAD - Brazilian
Question-Answering Dataset: Dataset para benchmark de modelos de
Machine Learning para question-answering em
Portugu^es brasileiro traduzindo o SQuAD com Google Cloud API}",
year = 2020,
} | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pritamdeka/cord-19-fulltext | 2022-02-05T02:29:13.000Z | [
"region:us"
] | pritamdeka | null | null | 1 | 3 | 2022-03-02T23:29:22 | # Dataset Card for [pritamdeka/cord-19-fulltext]
## Dataset Description
### Dataset Summary
This is a modified [cord19](https://huggingface.co/datasets/cord19) dataset which contains only the fulltext field. This can be used directly for language modelling tasks.
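Because each record carries only free text, a common first step before language modelling is packing the text into fixed-size word windows. A minimal illustrative sketch (the window size and helper function are assumptions for illustration, not part of the dataset tooling):

```python
def chunk_words(text, window=128):
    """Split free text into consecutive windows of at most `window` words."""
    words = text.split()
    return [" ".join(words[i:i + window]) for i in range(0, len(words), window)]

# Example on a synthetic document of 300 words:
chunks = chunk_words("word " * 300)
print(len(chunks), len(chunks[0].split()), len(chunks[-1].split()))  # 3 128 44
```

In practice the same function would be applied to each record's full-text field before tokenization.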
### Languages
English
### Citation Information
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and
K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and
Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and
D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
| 870 | [
[
-0.0008406639099121094,
-0.0533447265625,
0.0008745193481445312,
0.0283203125,
-0.0206298828125,
-0.01275634765625,
-0.040740966796875,
-0.0161285400390625,
0.011627197265625,
0.026611328125,
-0.03924560546875,
-0.0552978515625,
-0.0111541748046875,
0.008804... |
projecte-aina/catalan_government_crawling | 2023-09-13T12:47:36.000Z | [
"task_categories:fill-mask",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ca",
"license:cc0-1.0",
"arxiv:2107.07903",
"region:us"
] | projecte-aina | The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan built from the web. It has been obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government during September and October 2020. It consists of 39.117.909 tokens, 1.565.433 sentences and 71.043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus. | @inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Catalan Government Crawling
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
task_ids: []
---
# Dataset Card for Catalan Government Crawling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5511667
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [ona.degibert@bsc.es](ona.degibert@bsc.es)
### Dataset Summary
The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan built from the web. It has been obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government during September and October 2020. It consists of 39,117,909 tokens, 1,565,433 sentences and 71,043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'text': 'Títol: Estudi de tres marededéus del bisbat de Solsona\nResponsables del projecte: Pep Paret conservador–restaurador de l\'Àrea de Pintura i Escultura sobre fusta del CRBMC\nL\'objecte d\'aquest estudi és un millor coneixement de l\'estat de conservació del patrimoni moble català, en concret de tres escultures romàniques del bisbat de Solsona.\nEs du a terme un estudi científic de tres marededéus del bisbat de Solsona: la Mare de Déu de Queralt, la Mare de Déu de Coaner i la Mare de Déu de la Quar.\nLes imatges originals són romàniques, però totes elles han patit modificacions estructurals...'
}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The corpus has been obtained by crawling all the `.gencat.cat` domains during July 2020.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
#### Who are the source language producers?
The data comes from the official Catalan Government websites.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from public web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. | 5,968 | [
[
-0.0305633544921875,
-0.033660888671875,
0.00725555419921875,
0.03778076171875,
-0.0196533203125,
0.02099609375,
-0.0274658203125,
-0.026275634765625,
0.047027587890625,
0.034576416015625,
-0.0206298828125,
-0.0701904296875,
-0.037445068359375,
0.01016998291... |
rocca/sims4-faces | 2022-03-12T06:58:39.000Z | [
"region:us"
] | rocca | null | null | 1 | 3 | 2022-03-02T23:29:22 | A collection of >200k screenshots from the Sims 4 character creator (face and upper-torso only), using the randomize button.
* There are ~100k masculine faces (`masc` folder), ~100k feminine faces (`fem` folder), ~12k faces with a masculine physical frame and feminine attire/makeup (`masc2fem` folder).
* All images are 917x917.
* Each image is about 40kb.
* The examples below are cropped slightly off-center, but in the actual data the characters are more centered.
* The files are named from `1.jpg` through to `N.jpg` (no zero-padding). For `fem`, `N=101499`. For `masc`, `N=103615`. For `masc2fem`, `N=12123`.
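Given the flat naming scheme above, enumerating a subset is straightforward. The sketch below assumes the three folders sit under a common root directory (the on-disk layout is not specified in the card, so treat `root` as an assumption):

```python
from pathlib import Path

# Counts taken from the card; the folder layout under `root` is an assumption.
SUBSET_SIZES = {"fem": 101499, "masc": 103615, "masc2fem": 12123}

def face_paths(root, subset):
    """Yield the expected image paths 1.jpg .. N.jpg (no zero-padding)."""
    for i in range(1, SUBSET_SIZES[subset] + 1):
        yield Path(root) / subset / f"{i}.jpg"
```

For example, `next(face_paths("sims4-faces", "fem"))` yields `sims4-faces/fem/1.jpg`.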
## fem examples:

## masc examples:

## masc2fem examples:

| 853 | [
[
-0.052337646484375,
-0.0174102783203125,
0.042694091796875,
0.026031494140625,
-0.01116180419921875,
0.006130218505859375,
0.02655029296875,
0.0025577545166015625,
0.005218505859375,
0.07916259765625,
-0.0836181640625,
-0.0273895263671875,
-0.0169677734375,
... |
sagnikrayc/quasar | 2022-10-25T09:54:36.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en-US",
"license:bsd-3-clause",
"arxiv:1707.03904",
"region:us"
] | sagnikrayc | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. | @article{dhingra2017quasar,
title={Quasar: Datasets for Question Answering by Search and Reading},
author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},
journal={arXiv preprint arXiv:1707.03904},
year={2017}
} | 0 | 3 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en-US
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
-
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: quasar-1
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/bdhingra/quasar)
- **Paper:** [Quasar: Datasets for Question Answering by Search and Reading](https://arxiv.org/abs/1707.03904)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
| 2,858 | [
[
-0.03753662109375,
-0.0343017578125,
0.006404876708984375,
0.01221466064453125,
-0.018768310546875,
0.00799560546875,
-0.004085540771484375,
-0.0237579345703125,
0.03289794921875,
0.046051025390625,
-0.066162109375,
-0.0677490234375,
-0.0421142578125,
0.0042... |
seamew/THUCNewsTitle | 2021-08-24T01:22:11.000Z | [
"region:us"
] | seamew | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
unicamp-dl/mrobust | 2022-10-02T22:39:57.000Z | [
"arxiv:2108.13897",
"arxiv:2105.06813",
"arxiv:2209.13738",
"region:us"
] | unicamp-dl | Robust04 translated datasets | # @misc{bonifacio2021mmarco,
# title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
# author={Luiz Henrique Bonifacio and Israel Campiotti and Vitor Jeronymo and Hugo Queiroz Abonizio and Roberto Lotufo and Rodrigo Nogueira},
# year={2021},
# eprint={2108.13897},
# archivePrefix={arXiv},
# primaryClass={cs.CL}
# }
# | 1 | 3 | 2022-03-02T23:29:22 | # Dataset Summary
**mRobust** is a multilingual version of the [TREC 2004 Robust passage ranking dataset](https://trec.nist.gov/data/robust/04.guidelines.html).
For more information, check out our papers:
<!-- * [**mRobust: A Multilingual Version of the MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897)
* [**A cost-benefit analysis of cross-lingual transfer methods**](https://arxiv.org/abs/2105.06813) -->
The current version is composed of 10 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian, Spanish, Dutch and Vietnamese.
### Supported languages
| Language name | Language code |
|---------------|---------------|
| English | english |
| Chinese | chinese |
| French | french |
| German | german |
| Indonesian | indonesian |
| Italian | italian |
| Portuguese | portuguese |
| Russian | russian |
| Spanish | spanish |
| Dutch | dutch |
| Vietnamese | vietnamese |
# Dataset Structure
You can load the mRobust dataset by choosing a specific language. We include the translated collections of documents and queries.
#### Queries
```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'queries-spanish')
>>> dataset['queries'][1]
{'id': '302', 'text': '¿Está controlada la enfermedad de la poliomielitis (polio) en el mundo?'}
```
#### Collection
```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'collection-portuguese')
>>> dataset['collection'][5]
{'id': 'FT931-16660', 'text': '930105 FT 05 JAN 93 / Cenelec: Correção O endereço do Cenelec, Comitê Europeu de Normalização Eletrotécnica, estava incorreto na edição de ontem. É Rue de Stassart 35, B-1050, Bruxelas, Tel (322) 519 6871. CEN, Comitê Europeu de Normalização, está localizado na Rue de Stassart 36, B-1050, Bruxelas, Tel 519 6811.'}
```
# Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2209.13738,
doi = {10.48550/ARXIV.2209.13738},
url = {https://arxiv.org/abs/2209.13738},
author = {Jeronymo, Vitor and Nascimento, Mauricio and Lotufo, Roberto and Nogueira, Rodrigo},
title = {mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 2,259 | [
[
-0.018707275390625,
-0.036224365234375,
0.0125885009765625,
0.0265045166015625,
-0.0228271484375,
0.006038665771484375,
-0.021392822265625,
-0.0350341796875,
0.0261383056640625,
0.0234222412109375,
-0.02886962890625,
-0.06005859375,
-0.031280517578125,
0.036... |
usc-isi/WikiConvert | 2022-10-24T17:40:43.000Z | [
"task_categories:fill-mask",
"task_categories:other",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:en",
... | usc-isi | Language Modelling with Cardinal Number Annotations. | @inproceedings{thawani-etal-2021-numeracy,
title = "Numeracy enhances the Literacy of Language Models",
author = "Thawani, Avijit and
Pujara, Jay and
Ilievski, Filip",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.557",
pages = "6960--6967",
abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
} | 5 | 3 | 2022-03-02T23:29:22 | ---
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- fill-mask
- other
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Wiki-Convert
YAML tags:
- {}
- found
language_bcp47:
- en-US
tags:
- numeracy
- natural-language-understanding
- tokenization
---
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- **Point of Contact:** [Avijit Thawani](mailto:thawani@isi.edu)
### Dataset Summary
Wiki-Convert is a 900,000+ sentence dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.
### Supported Tasks and Leaderboards
- `sequence-modeling`: The dataset can be used to train a model for Language Modeling, which consists of predicting a held-out token given its context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html).
### Languages
The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.
## Dataset Structure
### Data Instances
Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the below example. The annotations are inspired by Numeracy-600K and are in the form of `length` and `offset` from the beginning of the sentence.
```
{
'id': 1080801, 'UNIQUE_STORY_INDEX': '1080801', 'offset': 83, 'length': 2, 'magnitude': 0, 'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a 10 cms deck gun. ''", 'number': 10
}
```
Please refer to https://github.com/avi-jit/numeracy-literacy for more details.
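The `offset`/`length` pair is enough to slice the annotated number back out of the sentence. Here is a minimal sketch on a made-up record (the field semantics are assumed to be 0-based character offsets; check the repository for the exact convention):

```python
def extract_number(record):
    # Slice the annotated character span out of the sentence text.
    start = record["offset"]
    end = start + record["length"]
    return record["comment"][start:end]

# Hypothetical record, not taken from the dataset.
example = {"comment": "The tower is 300 metres tall.",
           "offset": 13, "length": 3, "number": 300}
```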
### Data Splits
| | Train | Dev | Test |
| ----- | :------: | :-----: | :----: |
| Input Sentences | 739,583 | 92,447 | 92,449|
## License
Provided under MIT License.
## Citation
```
@inproceedings{thawani-etal-2021-numeracy,
title = "Numeracy enhances the Literacy of Language Models",
author = "Thawani, Avijit and
Pujara, Jay and
Ilievski, Filip",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.557",
pages = "6960--6967",
abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.",
}
```
Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset. | 5,417 | [
[
-0.047119140625,
-0.055755615234375,
0.001811981201171875,
0.011627197265625,
-0.0254669189453125,
-0.007167816162109375,
-0.027008056640625,
-0.017974853515625,
0.0205078125,
0.041107177734375,
-0.041229248046875,
-0.06207275390625,
-0.04571533203125,
0.031... |
valurank/hate-multi | 2022-10-25T09:57:06.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:derived",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | 0 | 3 | 2022-03-02T23:29:22 | ---
language:
- en
license: other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- derived
task_categories:
- text-classification
---
# Dataset Card for hate-multi
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
This dataset contains a collection of text labeled as hate speech (class 1) or not (class 0).
## Dataset Creation
The dataset was creating by aggregating multiple publicly available datasets.
### Source Data
The following datasets were used:
* https://huggingface.co/datasets/hate_speech18 - Filtered to remove examples labeled as 'idk/skip' or 'relation'
* https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lowercasing and removing mentions and URLs. Dropped instances labeled as 'offensive language'
* https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech - Tweet text cleaned by lowercasing and removing mentions and URLs. Dropped instances with hatespeech == 1
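The tweet cleaning described above (lowercasing, stripping mentions and URLs) can be sketched with a couple of regexes. The exact patterns used by the curators are not given in the card, so these are assumptions:

```python
import re

def clean_tweet(text):
    # Lowercase, then strip @mentions and URLs (assumed patterns),
    # and collapse the leftover whitespace.
    text = text.lower()
    text = re.sub(r"@\w+", "", text)
    text = re.sub(r"https?://\S+", "", text)
    return re.sub(r"\s+", " ", text).strip()
```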
| 1,171 | [
[
-0.044342041015625,
-0.037750244140625,
-0.01262664794921875,
0.0156097412109375,
-0.0254974365234375,
0.0189056396484375,
-0.01285552978515625,
-0.02301025390625,
0.03778076171875,
0.020599365234375,
-0.062164306640625,
-0.06097412109375,
-0.067138671875,
0... |
wikilee/ADFA_Mapping | 2022-03-20T04:19:27.000Z | [
"region:us"
] | wikilee | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
zapsdcn/citation_intent | 2021-12-08T20:16:11.000Z | [
"region:us"
] | zapsdcn | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zapsdcn/rct-20k | 2021-12-08T03:37:58.000Z | [
"region:us"
] | zapsdcn | null | null | 0 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zhoujun/hitab | 2022-02-08T08:35:57.000Z | [
"region:us"
] | zhoujun | null | null | 1 | 3 | 2022-03-02T23:29:22 | annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- tableqa, data2text
task_ids:
- tableqa | 232 | [
[
-0.03192138671875,
-0.0171661376953125,
0.00728607177734375,
0.053131103515625,
-0.0170440673828125,
0.00594329833984375,
-0.035491943359375,
-0.0338134765625,
0.04217529296875,
0.057586669921875,
-0.04302978515625,
-0.02874755859375,
-0.047210693359375,
0.0... |
zj88zj/PubMed_200k_RCT | 2021-12-11T18:12:48.000Z | [
"region:us"
] | zj88zj | null | null | 2 | 3 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ruanchaves/hashset_distant | 2022-10-20T19:13:21.000Z | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | ruanchaves | Hashset is a new dataset consisiting on 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation. | @article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
} | 0 | 3 | 2022-03-04T22:36:15 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
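Because the distant split is collected from camel-cased hashtags, a rule-based baseline recovers many segmentations with a single regex. This is a naive sketch, not the supervision procedure used by the authors:

```python
import re

# Insert a space at lower->upper, letter->digit, and digit->letter boundaries.
_BOUNDARY = re.compile(
    r"(?<=[a-z])(?=[A-Z])"
    r"|(?<=[A-Za-z])(?=[0-9])"
    r"|(?<=[0-9])(?=[A-Za-z])"
)

def segment_camel_case(hashtag):
    return _BOUNDARY.sub(" ", hashtag)
```

On the example instance above, `segment_camel_case("Youth4Nation")` reproduces the gold segmentation `"Youth 4 Nation"`.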
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | 2,475 | [
[
-0.036865234375,
-0.0494384765625,
0.021484375,
0.0015783309936523438,
-0.03167724609375,
0.021026611328125,
-0.021392822265625,
-0.055206298828125,
0.0103912353515625,
-0.0073394775390625,
-0.04071044921875,
-0.057830810546875,
-0.042877197265625,
0.0033760... |
ruanchaves/dev_stanford | 2022-10-20T19:13:37.000Z | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | ruanchaves | 1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140. | @article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
} | 0 | 3 | 2022-03-05T07:28:41 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Dev-Stanford
tags:
- word-segmentation
---
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | 2,504 | [
[
-0.035614013671875,
-0.066650390625,
0.0242919921875,
0.0168914794921875,
-0.0258331298828125,
0.01192474365234375,
-0.0252532958984375,
-0.0269012451171875,
0.029876708984375,
0.00968170166015625,
-0.047210693359375,
-0.07659912109375,
-0.039031982421875,
0... |
batterydata/paper-abstracts | 2022-09-05T15:54:02.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | batterydata | null | null | 1 | 3 | 2022-03-05T13:55:17 | ---
language:
- en
license:
- apache-2.0
task_categories:
- text-classification
pretty_name: 'Battery Abstracts Dataset'
---
# Battery Abstracts Dataset
This dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled according to the journals in which they were published: 14 battery journals and 1,044 non-battery journals were selected to form this database.
- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.
- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.
- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.
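The split sizes above work out to roughly a 70/20/10 partition (an inference from the counts, not stated in the card); a quick sanity check:

```python
# Counts taken from the split description above.
splits = {"train": 32_663, "val": 9_333, "test": 4_667}

total = sum(splits.values())  # matches the 46,663 papers stated above
ratios = {name: count / total for name, count in splits.items()}
```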
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/paper-abstracts")
```
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` | 1,160 | [
[
-0.006999969482421875,
0.0002460479736328125,
0.043731689453125,
0.00783538818359375,
-0.01100921630859375,
0.01397705078125,
-0.0011091232299804688,
-0.0157318115234375,
0.020538330078125,
0.022979736328125,
-0.020904541015625,
-0.055206298828125,
-0.0261230468... |
ruanchaves/loyola | 2022-10-20T19:13:04.000Z | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | ruanchaves | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier.
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
} | 0 | 3 | 2022-03-05T19:23:21 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: The Loyola University of Delaware Identifier Splitting Oracle
tags:
- word-segmentation
---
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
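As a rough illustration of the task, a naive rule-based splitter (not the oracle's annotation method) can reproduce simple cases like the sample above, assuming only camelCase and special-character boundaries:

```python
import re

def naive_split(identifier: str) -> str:
    """Insert spaces at camelCase and special-character boundaries.

    A naive baseline for illustration, not the oracle's gold method.
    """
    # Separate runs of special characters (e.g. "::", "_") from alphanumerics.
    spaced = re.sub(r"([^0-9A-Za-z]+)", r" \1 ", identifier)
    # Split lowercase-to-uppercase transitions: "CreateProcess" -> "Create Process".
    spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", spaced)
    return " ".join(spaced.split())

# naive_split("::CreateProcess") -> ":: Create Process"
```

Real identifiers often contain abbreviations and same-case concatenations (e.g. `getusername`), which such rules cannot split; that is why a human-verified oracle like LUDISO is needed.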
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | 3,312 | [
[
-0.04571533203125,
-0.04864501953125,
0.0226593017578125,
0.0246124267578125,
-0.03826904296875,
0.020660400390625,
0.004451751708984375,
-0.035430908203125,
0.040130615234375,
0.0198516845703125,
-0.039703369140625,
-0.049072265625,
-0.036376953125,
0.00994... |
z-uo/qasper-squad | 2022-10-25T10:02:49.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | z-uo | null | null | 0 | 3 | 2022-03-08T09:20:15 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- question-answering
task_ids:
- closed-domain-qa
pretty_name: qasper-squad
language_bcp47:
- en-US
---
# Qasper in SQuAD format
This is a reformatted version of the [qasper](https://huggingface.co/datasets/qasper) dataset in the SQuAD format.
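For illustration, here is a sketch of how a Qasper question/answer pair can be flattened into a SQuAD-style record; the exact field layout of this dataset may differ, so treat the names below as assumptions:

```python
def to_squad_record(qid, question, context, answer_text):
    """Build one SQuAD-style record (hypothetical field layout).

    SQuAD stores each answer together with its character offset
    into the context paragraph.
    """
    start = context.find(answer_text)  # -1 if the answer is not extractive
    return {
        "id": qid,
        "question": question,
        "context": context,
        "answers": {
            "text": [answer_text] if start >= 0 else [],
            "answer_start": [start] if start >= 0 else [],
        },
    }
```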
[
-0.00799560546875,
-0.0225677490234375,
-0.01148223876953125,
0.032257080078125,
-0.0162811279296875,
0.033233642578125,
0.0294952392578125,
-0.0194244384765625,
0.053253173828125,
0.047027587890625,
-0.068603515625,
-0.01336669921875,
-0.03668212890625,
0.0... |
Non-Residual-Prompting/C2Gen | 2022-10-25T10:02:58.000Z | [
"task_categories:text-generation",
"size_categories:<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1911.03705",
"region:us"
] | Non-Residual-Prompting | The task of C2Gen is to both generate commonsensical text which include the given words, and also have the generated text adhere to the given context. | TODO | 1 | 3 | 2022-03-09T16:09:50 | ---
language:
- en
license:
- cc-by-sa-4.0
size_categories:
- <100K
task_categories:
- text-generation
---
# Dataset Card for Contextualized CommonGen(C2Gen)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Initial Data Collection and Normalization](#initial-cata-collection-and-normalization)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting)
- **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471)
- **Point of Contact:** [Fredrik Carlsson](mailto:Fredrik.Carlsson@ri.se)
### Dataset Summary
CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion. But the task does not allow context to be included. Therefore, to complement CommonGen, we provide an extended test set C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471) where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text which includes the given words, and also have the generated text adhere to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
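A minimal sketch of checking the word-inclusion half of the task for a generated continuation (exact-token matching only; inflected forms, which human evaluation would accept, are ignored):

```python
def satisfies_constraints(continuation: str, words: list[str]) -> bool:
    """Check that every target word appears in the generated continuation.

    Naive exact-token matching; real CommonGen-style evaluation also
    accepts inflected forms, which this sketch ignores.
    """
    tokens = {t.strip(".,!?;:").lower() for t in continuation.split()}
    return all(w.lower() in tokens for w in words)

# satisfies_constraints("They follow the series in a low voice.",
#                       ["follow", "series", "voice"]) -> True
```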
### Data Splits
Test
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers
to focus on methods that do not support context. This is orthogonal to their belief that many application areas necessitate the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance were allowed to participate. Finally, all contexts were manually verified, and fixed in terms of typos and poor quality. Furthermore, we want to raise awareness that C2Gen can contain personal data or offensive content. If you would encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
| 3,177 | [
[
-0.04296875,
-0.0531005859375,
0.0118865966796875,
0.026641845703125,
-0.03662109375,
-0.019805908203125,
-0.042877197265625,
-0.031463623046875,
0.007015228271484375,
0.0250396728515625,
-0.07177734375,
-0.05865478515625,
-0.042694091796875,
0.0440368652343... |
damlab/uniprot | 2022-03-12T12:08:29.000Z | [
"region:us"
] | damlab | null | null | 3 | 3 | 2022-03-09T20:00:12 | ---
license: mit
---
# Dataset Description
## Dataset Summary
This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins.
This dataset was parsed from the FASTA file at https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz.
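A minimal sketch of how such a FASTA file can be parsed into `(id, description, sequence)` records; this is an illustrative parser, not necessarily the exact script used to build the dataset:

```python
def parse_fasta(lines):
    """Yield (id, description, sequence) triples from FASTA lines.

    UniProt headers look like '>sp|P12345|NAME_SPECIES Description ...';
    here the full header is kept as the description and the first
    whitespace-delimited token as the id.
    """
    header, seq = None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header.split()[0], header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        yield header.split()[0], header, "".join(seq)
```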
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Fields: id, description, sequence
Data Splits: None
## Dataset Creation
The dataset was downloaded and parsed into a `dataset` object and uploaded unchanged.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs that HIV is known to develop resistance via mutations.
Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of genes from "well studied" genomes. This may impact the "broadness" of the genes contained.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
| 1,524 | [
[
-0.019500732421875,
-0.025421142578125,
0.0060882568359375,
0.006839752197265625,
-0.016571044921875,
-0.0007925033569335938,
0.0275115966796875,
-0.005489349365234375,
0.04351806640625,
0.049468994140625,
-0.056732177734375,
-0.034149169921875,
-0.0622253417968... |
Khedesh/PeymaNER | 2022-03-11T11:30:13.000Z | [
"region:us"
] | Khedesh | null | null | 1 | 3 | 2022-03-11T11:18:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Wikram/autonlp-data-QuestionAnswer | 2022-03-12T22:20:04.000Z | [
"region:us"
] | Wikram | null | null | 0 | 3 | 2022-03-12T22:20:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
stjokerli/TextToText_multirc_seqio | 2022-03-19T12:45:30.000Z | [
"region:us"
] | stjokerli | null | null | 0 | 3 | 2022-03-13T09:31:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wanyu/IteraTeR_full_sent | 2022-10-24T18:58:37.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | wanyu | null | null | 0 | 3 | 2022-03-13T19:29:50 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_full_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| 575 | [
[
-0.00531768798828125,
-0.03533935546875,
0.05169677734375,
0.00943756103515625,
-0.0235748291015625,
0.0160980224609375,
-0.018524169921875,
-0.0180816650390625,
0.0008001327514648438,
0.05657958984375,
-0.046051025390625,
-0.0289306640625,
-0.0164947509765625,
... |
openclimatefix/uk_pv | 2022-11-30T17:02:42.000Z | [
"task_categories:time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:en",
"license:mit",
"pv",
... | openclimatefix | # UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018-01-01 to 2021-10-27.
The time series of solar generation is in 5 minutes chunks.
This data is collected from live PV systems in the UK. We have obfuscated the location of the pv systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@openclimatefix.org.
## Files
The dataset contains two files
- metadata.csv: Data about the PV systems, e.g location
- pv.netcdf: Time series of PV solar generation
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the pv timeseries data
The csv columns are
- ss_id: the id of the system
- latitude_rounded: latitude of the pv system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the pv system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the pv system
- tilt: The tilt of the pv system
- kwp: The capacity of the pv system
- operational_at: the datetime the pv system started working
### pv.netcdf
Time series data of pv solar generation data is in a [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kw) for that pv system.
The ss_id's here are a subset of the all the ss_id's in the metadata
The co-ordinates of the data are 'datetime' which is the datetime of the solar generation reading. | @InProceedings{uk_pv,
title = {UK PV solar generation dataset},
author={Open Climate Fix.
},
year={2022}
} | 6 | 3 | 2022-03-14T12:20:19 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: United Kingdom PV Solar generation
size_categories:
- 1B<n<10B
source_datasets:
- original
tags:
- pv
- photovoltaic
- environment
- climate
- energy
- electricity
task_categories:
- time-series-forecasting
task_ids:
- multivariate-time-series-forecasting
---
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018 to 2021.
Time granularity varies from 2 minutes to 30 minutes.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@openclimatefix.org.
## Files
- metadata.csv: Data about the PV systems, e.g location
- 2min.parquet: Power output for PV systems every 2 minutes.
- 5min.parquet: Power output for PV systems every 5 minutes.
- 30min.parquet: Power output for PV systems every 30 minutes.
- pv.netcdf: (legacy) Time series of PV solar generation every 5 minutes
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.
The csv columns are:
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### {2,5,30}min.parquet
Time series of solar generation for a number of systems.
Each file includes the systems for which there is enough granularity.
In particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.
The files contain 3 columns:
- ss_id: the id of the system
- timestamp: the timestamp
- generation_wh: the generated power (in kW) at the given timestamp for the given system
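To illustrate the relationship between the granularities, here is a stdlib-only sketch that averages 5-minute readings into 30-minute buckets per system (the released 30min file may have been produced with a different aggregation):

```python
from collections import defaultdict
from datetime import datetime

def to_30min_buckets(rows):
    """Average (ss_id, timestamp, value) readings into 30-minute buckets.

    Illustrative only: how a coarser file could be derived from the
    finer-grained one.
    """
    buckets = defaultdict(list)
    for ss_id, ts, value in rows:
        # Floor the timestamp to the start of its 30-minute window.
        floored = ts.replace(minute=ts.minute - ts.minute % 30,
                             second=0, microsecond=0)
        buckets[(ss_id, floored)].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}
```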
### pv.netcdf (legacy)
Time series data of PV solar generation data is in an [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata
The coordinates of the data are tagged as 'datetime', which is the datetime of the solar generation reading.
This is a subset of the more recent `5min.parquet` file.
## Example
Using Hugging Face Datasets:
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/uk_pv")
```
## Useful links
https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial | 2,942 | [
[
-0.020477294921875,
0.0091400146484375,
0.00975799560546875,
0.0183563232421875,
-0.03485107421875,
-0.00988006591796875,
0.00560760498046875,
-0.007656097412109375,
0.0087127685546875,
0.032012939453125,
-0.05926513671875,
-0.042144775390625,
-0.017181396484375... |
cgarciae/cartoonset | 2022-03-23T19:12:10.000Z | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"arxiv:1711.05139",
"region:us"
] | cgarciae | Cartoon Set is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork
categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible
combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes. | null | 11 | 3 | 2022-03-14T23:35:29 | ---
pretty_name: Cartoon Set
size_categories:
- 10K<n<100K
task_categories:
- image
- computer-vision
- generative-modelling
license: cc-by-4.0
---
# Dataset Card for Cartoon Set
## Table of Contents
- [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://google.github.io/cartoonset/
- **Repository:** https://github.com/google/cartoonset/
- **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

[Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.
#### Usage
`cartoonset` provides the images as PNG byte strings, this gives you a bit more flexibility into how to load the data. Here we show 2 ways:
**Using PIL:**
```python
import datasets
from io import BytesIO
from PIL import Image
ds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k"
def process_fn(sample):
img = Image.open(BytesIO(sample["img_bytes"]))
...
return {"img": img}
ds = ds.map(process_fn, remove_columns=["img_bytes"])
```
**Using TensorFlow:**
```python
import datasets
import tensorflow as tf
hfds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k"
ds = tf.data.Dataset.from_generator(
lambda: hfds,
output_signature={
"img_bytes": tf.TensorSpec(shape=(), dtype=tf.string),
},
)
def process_fn(sample):
img = tf.image.decode_png(sample["img_bytes"], channels=3)
...
return {"img": img}
ds = ds.map(process_fn)
```
**Additional features:**
You can also access the features that generated each sample e.g:
```python
ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features") # or "100k+features"
```
Apart from `img_bytes` these configurations add a total of 18 * 2 additional `int` features, these come in `{feature}`, `{feature}_num_categories` pairs where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features.
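These pairs make it straightforward to one-hot encode each attribute; a small sketch (the encoding itself is not part of the dataset):

```python
def one_hot(value: int, num_categories: int) -> list[int]:
    """Encode a cartoon attribute as a one-hot vector, using the paired
    `{feature}` / `{feature}_num_categories` fields."""
    vec = [0] * num_categories
    vec[value] = 1
    return vec

# e.g. eye_angle=0 with eye_angle_num_categories=3 -> [1, 0, 0]
```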
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img_bytes': b'0x...',
}
```
If `+features` is added to the dataset name, the following additional fields are provided:
```python
{
'img_bytes': b'0x...',
'eye_angle': 0,
'eye_angle_num_categories': 3,
'eye_lashes': 0,
'eye_lashes_num_categories': 2,
'eye_lid': 0,
'eye_lid_num_categories': 2,
'chin_length': 2,
'chin_length_num_categories': 3,
...
}
```
### Data Fields
- `img_bytes`: A byte string containing the raw data of a 500x500 PNG image.
If `+features` is appended to the dataset name, the following additional `int32` fields are provided:
- `eye_angle`
- `eye_angle_num_categories`
- `eye_lashes`
- `eye_lashes_num_categories`
- `eye_lid`
- `eye_lid_num_categories`
- `chin_length`
- `chin_length_num_categories`
- `eyebrow_weight`
- `eyebrow_weight_num_categories`
- `eyebrow_shape`
- `eyebrow_shape_num_categories`
- `eyebrow_thickness`
- `eyebrow_thickness_num_categories`
- `face_shape`
- `face_shape_num_categories`
- `facial_hair`
- `facial_hair_num_categories`
- `hair`
- `hair_num_categories`
- `eye_color`
- `eye_color_num_categories`
- `face_color`
- `face_color_num_categories`
- `hair_color`
- `hair_color_num_categories`
- `glasses`
- `glasses_num_categories`
- `glasses_color`
- `glasses_color_num_categories`
- `eyes_slant`
- `eye_slant_num_categories`
- `eyebrow_width`
- `eyebrow_width_num_categories`
- `eye_eyebrow_distance`
- `eye_eyebrow_distance_num_categories`
### Data Splits
Train
## Dataset Creation
### Licensing Information
This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@article{DBLP:journals/corr/abs-1711-05139,
author = {Amelie Royer and
Konstantinos Bousmalis and
Stephan Gouws and
Fred Bertsch and
Inbar Mosseri and
Forrester Cole and
Kevin Murphy},
title = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings},
journal = {CoRR},
volume = {abs/1711.05139},
year = {2017},
url = {http://arxiv.org/abs/1711.05139},
eprinttype = {arXiv},
eprint = {1711.05139},
timestamp = {Mon, 13 Aug 2018 16:47:38 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
| 5,664 | [
[
-0.04986572265625,
-0.0234222412109375,
0.000050187110900878906,
0.01213836669921875,
-0.03082275390625,
-0.00655364990234375,
-0.028839111328125,
-0.037139892578125,
0.038238525390625,
0.0301666259765625,
-0.044281005859375,
-0.056427001953125,
-0.0508728027343... |
indonesian-nlp/lfqa_id | 2022-03-19T11:50:02.000Z | [
"region:us"
] | indonesian-nlp | null | null | 2 | 3 | 2022-03-19T11:39:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggan/edges2shoes | 2022-04-12T14:18:05.000Z | [
"region:us"
] | huggan | null | null | 0 | 3 | 2022-03-23T16:12:59 | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` | 232 | [
[
0.0026531219482421875,
-0.0191802978515625,
0.024688720703125,
0.002223968505859375,
-0.026824951171875,
-0.041259765625,
-0.0116729736328125,
-0.03485107421875,
-0.0062103271484375,
0.0161590576171875,
-0.009185791015625,
-0.0302886962890625,
-0.06756591796875,... |
sentence-transformers/NQ-retrieval | 2022-03-24T08:18:36.000Z | [
"region:us"
] | sentence-transformers | null | null | 0 | 3 | 2022-03-24T08:17:51 | #NQ-retrieval
This is a nicely formatted version of the [Natural Questions](https://ai.google.com/research/NaturalQuestions/) dataset, formatted to train and evaluate retrieval systems.
Each row contains the following entries:
- **question**: Original question sent to the Google search engine
- **title**: Title of Wikipedia article
- **candidates**: A list with the passages from the original Wikipedia HTML document
- **passage_types**: Types (text, table, list) of the candidate passages
- **long_answers**: IDs of the candidate passages that were selected as relevant by annotators. Might be empty if no relevant passage has been identified
- **document_url** | 670 | [
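Assuming `long_answers` holds integer indices into `candidates` (an assumption; check the actual schema), the annotator-selected passages can be recovered like this:

```python
def relevant_passages(example):
    """Return the candidate passages marked relevant by annotators.

    Assumes `long_answers` holds indices into `candidates`; it may be
    empty when no relevant passage was identified.
    """
    return [example["candidates"][i] for i in example["long_answers"]]
```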
[
-0.029205322265625,
-0.059783935546875,
0.0283050537109375,
-0.0107879638671875,
-0.018646240234375,
-0.0123291015625,
0.00777435302734375,
0.0013103485107421875,
0.045135498046875,
0.052276611328125,
-0.05303955078125,
-0.0231170654296875,
-0.001674652099609375... |
laion/conceptual-captions-12m-webdataset | 2022-03-27T19:35:10.000Z | [
"region:us"
] | laion | null | null | 4 | 3 | 2022-03-27T15:31:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tskolm/youtube_top_popular_videos_comments | 2022-03-29T10:46:48.000Z | [
"region:us"
] | tskolm | null | null | 0 | 3 | 2022-03-29T10:45:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
huggan/vangogh2photo | 2022-04-12T13:58:45.000Z | [
"arxiv:1703.10593",
"region:us"
] | huggan | null | null | 0 | 3 | 2022-03-29T12:33:03 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 799 | [
[
-0.00348663330078125,
-0.02215576171875,
0.01763916015625,
0.0002982616424560547,
-0.0276947021484375,
0.00079345703125,
-0.0090789794921875,
-0.024566650390625,
0.0032215118408203125,
0.04345703125,
-0.04632568359375,
-0.051910400390625,
-0.02978515625,
0.0... |
hackathon-pln-es/nli-es | 2022-04-04T03:30:59.000Z | [
"arxiv:1809.05053",
"region:us"
] | hackathon-pln-es | null | null | 2 | 3 | 2022-03-29T23:54:07 | annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
# Dataset Card for nli-es
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A Spanish Natural Language Inference dataset put together from the sources:
- the Spanish slice of the XNLI dataset;
- machine-translated Spanish version of the SNLI dataset
- machine-translated Spanish version of the Multinli dataset
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
## Dataset Structure
### Data Instances
A line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label") and the ID number of the pair of sentences as given in the original dataset.
Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
{
"gold_label": "neutral",
"pairID": 1,
"sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
"sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
}
### Data Fields
gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
pairID: A string identifying a pair sentence. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.
sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
### Data Splits
The whole dataset was used for training. We did not use an evaluation split, as evaluation was performed on SemEval-2015 Task 2.
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating from the SNLI original dataset to Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
### Source Data
#### Initial Data Collection and Normalization
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the source language producers?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Annotations
#### Annotation process
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the annotators?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Personal and Sensitive Information
In general, no sensitive information is conveyed in the sentences.
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
### Discussion of Biases
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
For discussion on the biases and limitations of the original datasets, please refer to their respective documentations:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Additional Information
### Dataset Curators
The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
Please refer to the respective documentations of the original datasets for information on their licenses:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Citation Information
If you need to cite this dataset, you can link to this readme.
h4iku/coconut_python2010 | 2023-09-28T23:17:32.000Z | [
"code",
"region:us"
] | h4iku | null | null | 0 | 3 | 2022-03-30T01:03:32 | ---
tags:
- code
pretty_name: CoCoNuT-Python(2010)
---
# Dataset Card for CoCoNuT-Python(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in a dataset's name is the cutoff year, i.e. the year of the newest commit in that dataset.
### Languages
- Python
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
First 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
First 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
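The `rem`→`add` pairing shown above can be rendered as a diff with a small helper (a convenience sketch, not part of the official dataset tooling):

```python
def to_diff(rem: str, add: str) -> str:
    """Render a buggy/fixed hunk pair in unified-diff style."""
    return f"- {rem}\n+ {add}"

print(to_diff("public Object next() {", "public FSEntry next() {"))
```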
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
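A sketch of that parsing in Python; splitting on `/buggy/` follows the structure of the example above and is an assumption about how all `meta` rows are laid out:

```python
def parse_meta(meta: str):
    """Split a meta row into project id, commit id, file name and original path."""
    project_id, path = meta.split(" ", 1)          # "1056" + absolute path
    prefix, original_path = path.split("/buggy/", 1)
    parts = prefix.split("/")
    commit_id, file_name = parts[-2], parts[-1]    # last two path components
    return project_id, commit_id, file_name, original_path

meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
        "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")
project_id, commit_id, file_name, original_path = parse_meta(meta)
```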
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 13,899 | 480,777 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
MLCommons/peoples_speech_v1.0 | 2022-08-10T16:41:34.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1T<n",
"source_datasets:original",
"language:en",
... | MLCommons | null | null | 6 | 3 | 2022-03-30T15:49:51 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: People's Speech
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
- robust-speech-recognition
- noisy-speech-recognition
---
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{
  "id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
  "audio": {
    "path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
    "array": array([-6.10351562e-05, ...]),
    "sampling_rate": 16000
  },
  "duration_ms": 14490,
  "text": "contends that the suspension clause requires a [...]"
}
```
### Data Fields
```
{
    "id": datasets.Value("string"),
    "audio": datasets.Audio(sampling_rate=16_000),
    "duration_ms": datasets.Value("int32"),
    "text": datasets.Value("string"),
}
```
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
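A hedged sketch of working with one of these configurations: the `load_dataset` call follows standard 🤗 Datasets usage and is left commented out because it triggers a very large download; the helper itself is plain Python:

```python
def total_hours(durations_ms):
    """Sum per-utterance durations given in milliseconds into hours."""
    return sum(durations_ms) / (1000 * 60 * 60)

# from datasets import load_dataset
# ds = load_dataset("MLCommons/peoples_speech_v1.0", "cc-by-clean", streaming=True)
# ...then sum the "duration_ms" field over the rows you iterate.

print(total_hours([3_600_000, 1_800_000]))  # 1.5 hours
```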
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues today, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
tomekkorbak/pile-toxicity-balanced | 2022-04-06T11:07:05.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 3 | 2022-03-31T12:43:11 | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored using [Perspective API](http://perspectiveapi.com) toxicity scores.
The procedure was the following:
1. A chunk of the Pile (3%, 7m documents) was scored using the Perspective API.
2. The first half of this dataset is [tomekkorbak/pile-toxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-toxic-chunk-0), the 100k *most* toxic documents of the scored chunk.
3. The second half of this dataset is [tomekkorbak/pile-nontoxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-nontoxic-chunk-0), the 100k *least* toxic documents of the scored chunk.
4. Then, the dataset was shuffled and a 9:1 train-test split was done.
## Basic stats
The average scores of the good and bad half are 0.0014 and 0.67, respectively. The average score of the whole dataset is 0.33; the median is 0.51.
However, the weighted average score (weighted by document length) is 0.45. Correlation between score and document length is 0.2.
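The length-weighted average can be reproduced with a few lines of plain Python (a sketch; the score and length columns mirror the per-subset table below):

```python
def weighted_mean(scores, lengths):
    """Mean of toxicity scores weighted by document length."""
    return sum(s * l for s, l in zip(scores, lengths)) / sum(lengths)

# Toy check: the longer document dominates the weighted average.
print(weighted_mean([0.9, 0.1], [900, 100]))  # ≈ 0.82
```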
Score histogram:

Mean score per Pile subset
| pile_set_name | score | length |
|:------------------|----------:|------------:|
| ArXiv | 0.141808 | 9963.82 |
| Books3 | 0.405541 | 8911.67 |
| DM Mathematics | 0.535474 | 8194 |
| Enron Emails | 0.541136 | 1406.76 |
| EuroParl | 0.373395 | 4984.36 |
| FreeLaw | 0.279582 | 8986.73 |
| Github | 0.495742 | 2184.86 |
| Gutenberg (PG-19) | 0.583263 | 4034 |
| HackerNews | 0.617917 | 3714.83 |
| NIH ExPorter | 0.0376628 | 1278.83 |
| OpenSubtitles | 0.674261 | 14881.1 |
| OpenWebText2 | 0.613273 | 2634.41 |
| PhilPapers | 0.549582 | 9693 |
| Pile-CC | 0.525136 | 2925.7 |
| PubMed Abstracts | 0.0388705 | 1282.29 |
| PubMed Central | 0.235012 | 7418.34 |
| StackExchange | 0.590904 | 2210.16 |
| USPTO Backgrounds | 0.0100077 | 2086.39 |
| Ubuntu IRC | 0.598423 | 4396.67 |
| Wikipedia (en) | 0.0136901 | 1515.89 |
| YoutubeSubtitles | 0.65201 | 4729.52 |
tan9/bioasq | 2022-04-01T09:40:24.000Z | [
"region:us"
] | tan9 | null | null | 0 | 3 | 2022-04-01T08:25:44 | annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
languages: []
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: bioasq
size_categories:
- unknown
source_datasets:
- extended|pubmed_qa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
iluvvatar/RuNNE | 2023-03-30T13:36:53.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"arxiv:2108.13112",
"region:us"
] | iluvvatar | null | null | 2 | 3 | 2022-04-02T07:55:42 | ---
language:
- ru
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: RuNNE
---
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of the NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity recognition and relation extraction, used in the RuNNE (2022)
competition (https://github.com/dialogue-evaluation/RuNNE).
Entities may be nested (see https://arxiv.org/abs/2108.13112).
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']`
you can download the list of entity types:
```
Dataset({
    features: ['type'],
    num_rows: 29
})
```
Using
`load_dataset('MalakhovIlya/RuNNE', 'data')` or `load_dataset('MalakhovIlya/RuNNE')`
you can download the data itself (DatasetDict)
The dataset consists of 3 splits: "train", "test" and "dev". Each of them contains text documents. The "train" and "test" splits also contain annotated entities; "dev" doesn't.
Each entity is represented by a string of the following format: "\<start> \<stop> \<type>", where \<start> is the position of the entity's first symbol in the text, \<stop> is the position of its last symbol, and \<type> is one of the entity types listed above.
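A minimal parsing sketch (the sentence and annotation below are invented for illustration, not taken from the corpus):

```python
def parse_entity(annotation: str):
    """Split a '<start> <stop> <type>' string into typed parts."""
    start, stop, ent_type = annotation.split(" ", 2)
    return int(start), int(stop), ent_type

# <stop> is the position of the last symbol, so the slice end is stop + 1.
text = "Илья Муромец жил в Москве."
start, stop, ent_type = parse_entity("19 24 CITY")
span = text[start:stop + 1]  # -> "Москве"
```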
P.S.
Original NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\_(ツ)_/¯
## Citation Information
```
@article{Artemova2022runne,
    title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
    author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
    journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
    year={2022}
}
```
hackathon-pln-es/spanish-to-quechua | 2022-10-25T10:03:46.000Z | [
"task_categories:translation",
"language:es",
"language:qu",
"region:us"
] | hackathon-pln-es | null | null | 6 | 3 | 2022-04-03T04:02:58 | ---
language:
- es
- qu
task_categories:
- translation
task:
- translation
---
# Spanish to Quechua
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [team members](#team-members)
## Dataset Description
This dataset is a compilation of websites and other datasets listed in the [dataset creation section](#dataset-creation). It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: To train the model (102 747 sentences).
- validation: To validate the model during training (12 844 sentences).
- test: To evaluate the model when the training is finished (12 843 sentences).
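As a quick sanity check of the split proportions (sizes copied from the list above):

```python
splits = {"train": 102_747, "validation": 12_844, "test": 12_843}
total = sum(splits.values())
shares = {name: round(n / total, 3) for name, n in splits.items()}
print(total, shares)  # roughly an 80/10/10 split
```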
## Dataset Creation
### Source Data
This dataset has been generated from:
- "Mundo Quechua" by "Ivan Acuña" - [available here](https://mundoquechua.blogspot.com/2006/07/frases-comunes-en-quechua.html)
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - [available here](https://elcomercio.pe/tecnologia/actualidad/traductor-frases-romanticas-quechua-noticia-467022-noticia/)
- "Piropos y frases de amor en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2019/12/palabras-en-quechua-de-amor.html)
- "Corazón en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2020/05/corazon-en-quechua.html)
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - [available here](https://tatoeba.org/es/sentences/search?from=spa&query=&to=que)
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - [available here](https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish/parallel_data/es-quy)
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset; we expect to improve it over time and especially to neutralize the biblical themes.
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos)
hackathon-pln-es/unam_tesis | 2023-10-11T14:57:54.000Z | [
"task_categories:text-classification",
"task_ids:language-modeling",
"annotations_creators:MajorIsaiah",
"annotations_creators:Ximyer",
"annotations_creators:clavel",
"annotations_creators:inoid",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n=200",
"source_dat... | hackathon-pln-es | null | null | 5 | 3 | 2022-04-03T23:25:31 | ---
annotations_creators:
- MajorIsaiah
- Ximyer
- clavel
- inoid
language_creators: [crowdsourced]
language: [es]
license: [apache-2.0]
multilinguality: [monolingual]
pretty_name: UNAM Tesis
size_categories:
- n=200
source_datasets: [original]
task_categories: [text-classification]
task_ids: [language-modeling]
---
# Dataset Card for "unam_tesis"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
- [yiselclavel@gmail.com](mailto:yiselclavel@gmail.com)
- [isaac7isaias@gmail.com](mailto:isaac7isaias@gmail.com)
### Dataset Summary
El dataset unam_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.
### Supported Tasks and Leaderboards
text-classification
### Languages
Español (es)
## Dataset Structure
### Data Instances
Las instancias del dataset son de la siguiente forma:
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19.
La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de NicolásRomero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico,con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación.
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
| Carreras | Número de instancias |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
### Data Fields
El dataset está compuesto por los siguientes campos: "texto|titulo|carrera". <br/>
texto: Se refiere al texto de la introducción de la tesis. <br/>
titulo: Se refiere al título de la tesis. <br/>
carrera: Se refiere al nombre de la carrera a la que pertenece la tesis. <br/>
### Data Splits
El dataset tiene 2 particiones: entrenamiento (train) y prueba (test).
| Partición | Número de instancias |
|--------------|-------------------|
| Entrenamiento | 800 |
| Prueba | 200 |
## Dataset Creation
### Curation Rationale
La creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.
### Source Data
#### Initial Data Collection and Normalization
El dataset original (dataset_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01.
Se optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.
Para ello, en primer lugar se consultó la Oferta Académica (http://oferta.unam.mx/indice-alfabetico.html) de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.
Este scraper obtiene de esta base de datos:
- Nombres del Autor
- Apellidos del Autor
- Título de la Tesis
- Año de la Tesis
- Carrera de la Tesis
A la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el "Resumen/Introduccion/Conclusion de la tesis", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.
#### Who are the source language producers?
Los datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.
### Annotations
El dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: "texto|autor_nombre|autor_apellido|titulo|año|carrera".
#### Annotation process
Se extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.
Luego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset_tesis_procesado):
- convertir a minúsculas
- tokenización
- eliminar palabras que no son alfanuméricas
- eliminar palabras vacías
- stemming: eliminar plurales
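A minimal sketch of that preprocessing pipeline in Python; the stopword list and the final plural-stripping rule are illustrative assumptions, not the authors' exact code:

```python
import re

STOPWORDS = {"de", "la", "el", "en", "y", "los", "las"}  # illustrative subset

def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"\w+", text.lower())            # lowercase + tokenize
    tokens = [t for t in tokens if t.isalnum()]          # drop non-alphanumeric tokens
    tokens = [t for t in tokens if t not in STOPWORDS]   # drop stopwords
    return [t[:-1] if t.endswith("s") else t for t in tokens]  # naive de-pluralization

print(preprocess("Las condiciones del aprendizaje en los alumnos"))
```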
#### Who are the annotators?
Las anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
El presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/).
### Discussion of Biases
The text contains some encoding errors, so some characters such as accented letters are not displayed correctly. Words with these characters are removed during preprocessing until the problem is fixed.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Team members (Hugging Face usernames):
[Isacc Isahias López López](https://huggingface.co/MajorIsaiah)
[Yisel Clavel Quintero](https://huggingface.co/clavel)
[Dionis López](https://huggingface.co/inoid)
[Ximena Yeraldin López López](https://huggingface.co/Ximyer)
### Licensing Information
Version 1.0.0 of the unam_tesis dataset is released under the <a href='http://www.apache.org/licenses/LICENSE-2.0'/> Apache-2.0 License </a>.
### Citation Information
"This database was created as part of the Hackathon 2022 de PLN en Español organized by Somos NLP and sponsored by Platzi, Paperspace and Hugging Face: https://huggingface.co/hackathon-pln-es."
To cite this dataset, please use the following citation format:
@inproceedings{Hackathon 2022 de PLN en Español,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Hackathon 2022 de PLN en Español},
year={2022}
}
### Contributions
Thanks to [@yiselclavel](https://github.com/yiselclavel) and [@IsaacIsaias](https://github.com/IsaacIsaias) for adding this dataset.
| 10,853 | [
[
-0.04437255859375,
-0.0484619140625,
0.012969970703125,
0.01068115234375,
-0.011016845703125,
0.0180816650390625,
-0.01035308837890625,
-0.032623291015625,
0.049713134765625,
-0.006977081298828125,
-0.03619384765625,
-0.05108642578125,
-0.02166748046875,
0.0... |
huggan/inat_butterflies_top10k | 2022-04-04T12:50:28.000Z | [
"region:us"
] | huggan | null | null | 1 | 3 | 2022-04-04T12:45:06 | Filtered version of https://huggingface.co/datasets/huggan/inat_butterflies
To pick the best images, CLIP was used to compare each image with a text description of a good image ("")
Notebook for the filtering: https://colab.research.google.com/drive/1OEqr1TtL4YJhdj_bebNWXRuG3f2YqtQE?usp=sharing
See the original dataset for sources and licence caveats (tl;dr check the image descriptions to make sure you aren't breaking a licence like CC-BY-NC-ND which some images have) | 475 | [
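The filtering described above boils down to scoring every image against a text prompt with CLIP and keeping the top-scoring ones. The sketch below illustrates only the selection step; the CLIP model call and the actual prompt text are not shown in the card, so the `clip_scores` values here are hypothetical stand-ins:

```python
# Score-and-keep-top-k selection, as used to pick the best 10k butterflies.
# In the real pipeline each score would come from CLIP similarity between
# an image and a text description; here the scores are made up.

def top_k_by_score(items, scores, k):
    """Return the k items with the highest scores, best first."""
    ranked = sorted(zip(items, scores), key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in ranked[:k]]

image_ids = ["img_a", "img_b", "img_c", "img_d"]
clip_scores = [0.21, 0.34, 0.05, 0.29]  # hypothetical CLIP similarities
print(top_k_by_score(image_ids, clip_scores, k=2))  # -> ['img_b', 'img_d']
```

The linked notebook performs the same ranking over the full collection before writing out the filtered subset.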
[
-0.058990478515625,
-0.038116455078125,
0.01287841796875,
0.0345458984375,
-0.04254150390625,
-0.005199432373046875,
0.0140533447265625,
-0.0523681640625,
0.06744384765625,
0.04522705078125,
-0.04229736328125,
-0.0223388671875,
-0.041290283203125,
0.04440307... |
ramnika003/autotrain-data-sentiment_analysis_project | 2022-04-05T09:16:59.000Z | [
"task_categories:text-classification",
"region:us"
] | ramnika003 | null | null | 0 | 3 | 2022-04-05T09:13:43 | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: sentiment_analysis_project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sentiment_analysis_project.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Realizing that I don`t have school today... or tomorrow... or for the next few months. I really nee[...]",
"target": 1
},
{
"text": "Good morning tweeps. Busy this a.m. but not in a working way",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['negative', 'neutral', 'positive'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16180 |
| valid | 4047 |
| 1,124 | [
[
-0.0316162109375,
0.001190185546875,
0.007965087890625,
0.032470703125,
-0.025238037109375,
0.0267486572265625,
-0.0167083740234375,
-0.007343292236328125,
0.010040283203125,
0.01520538330078125,
-0.049896240234375,
-0.062286376953125,
-0.0413818359375,
0.00... |
huggingnft/cryptopunks | 2022-04-16T17:59:07.000Z | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | huggingnft | null | null | 4 | 3 | 2022-04-10T08:52:12 | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cryptopunks
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptopunks).
Model is available [here](https://huggingface.co/huggingnft/cryptopunks).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptopunks")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| 5,907 | [
[
-0.04974365234375,
-0.048583984375,
0.01061248779296875,
0.0209197998046875,
-0.0313720703125,
0.01042938232421875,
-0.01425933837890625,
-0.044158935546875,
0.062042236328125,
0.0310821533203125,
-0.06353759765625,
-0.06878662109375,
-0.048675537109375,
0.0... |
arjundd/skm-tea-mini | 2022-05-02T20:01:34.000Z | [
"language:en",
"license:other",
"mri",
"quantitative mri",
"reconstruction",
"segmentation",
"detection",
"arxiv:2203.06823",
"region:us"
] | arjundd | null | null | 0 | 3 | 2022-04-10T17:16:33 | ---
language: en
license: other
tags:
- mri
- quantitative mri
- reconstruction
- segmentation
- detection
---
# SKM-TEA Sample Data
This dataset consists of a subset of scans from the [SKM-TEA dataset](https://arxiv.org/abs/2203.06823). It can be used to build tutorials / demos with the SKM-TEA dataset.
To access to the full dataset, please follow instructions on [Github](https://github.com/StanfordMIMI/skm-tea/blob/main/DATASET.md).
**NOTE**: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.
## Details
This mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are [lzf-compressed](http://www.h5py.org/lzf/) to reduce size while maximizing speed for decompression.
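Reading the lzf-compressed HDF5 files requires no special handling, since h5py decompresses the lzf filter transparently. A minimal round-trip sketch (the file name `demo.h5` and the dataset name `kspace` are illustrative only, not the actual SKM-TEA layout):

```python
import h5py
import numpy as np

# Write a small lzf-compressed file, then read it back. h5py supports the
# lzf filter natively, so reads look identical to uncompressed datasets.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("kspace", data=np.arange(12).reshape(3, 4), compression="lzf")

with h5py.File("demo.h5", "r") as f:
    arr = f["kspace"][:]  # transparent decompression on read
print(arr.shape)  # (3, 4)
```

lzf trades a lower compression ratio for very fast decompression, which matches the card's goal of reducing size while keeping reads fast.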
## License
By using this dataset, you agree to the [Stanford University Dataset Research Use Agreement](https://stanfordaimi.azurewebsites.net/datasets/4aaeafb9-c6e6-4e3c-9188-3aaaf0e0a9e7).
## Reference
If you use this dataset, please reference the SKM-TEA paper:
```
@inproceedings{
desai2021skmtea,
title={{SKM}-{TEA}: A Dataset for Accelerated {MRI} Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation},
author={Arjun D Desai and Andrew M Schmidt and Elka B Rubin and Christopher Michael Sandino and Marianne Susan Black and Valentina Mazzoli and Kathryn J Stevens and Robert Boutin and Christopher Re and Garry E Gold and Brian Hargreaves and Akshay Chaudhari},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=YDMFgD_qJuA}
}
```
| 1,736 | [
[
-0.0170135498046875,
-0.023040771484375,
0.01143646240234375,
0.01436614990234375,
-0.0380859375,
0.000025212764739990234,
0.0009250640869140625,
-0.02349853515625,
0.0110015869140625,
0.0333251953125,
-0.044586181640625,
-0.03448486328125,
-0.020660400390625,
... |
raquiba/Sarcasm_News_Headline | 2022-04-14T08:19:08.000Z | [
"region:us"
] | raquiba | null | null | 2 | 3 | 2022-04-12T03:50:36 | Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for Sarcasm Detection is collected from two news website. TheOnion aims at producing sarcastic versions of current events and we collected all the headlines from News in Brief and News in Photos categories (which are sarcastic). We collect real (and non-sarcastic) news headlines from HuffPost.
This new dataset has the following advantages over the existing Twitter datasets:
Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.
Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements. | 1,309 | [
[
-0.0179901123046875,
-0.050628662109375,
0.0166168212890625,
0.05010986328125,
-0.032470703125,
-0.01141357421875,
-0.0234222412109375,
-0.037841796875,
0.0406494140625,
0.0279541015625,
-0.0418701171875,
-0.054473876953125,
-0.026824951171875,
0.0166015625,... |
mwong/climate-evidence-related | 2022-10-25T10:06:54.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
... | mwong | null | null | 2 | 3 | 2022-04-12T10:58:49 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready to train and evaluate.
The training objective is a text classification task - given a claim and evidence, predict whether the evidence is related to the claim.
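A common way to feed such claim-evidence pairs to a text classifier is to join them into a single input string. A minimal sketch of that formatting step (the `[SEP]` separator and the label names are generic assumptions, not this dataset's exact schema):

```python
LABELS = {0: "not related", 1: "related"}  # hypothetical label mapping

def make_input(claim: str, evidence: str, sep: str = " [SEP] ") -> str:
    """Join a claim and an evidence sentence into one classifier input."""
    return claim.strip() + sep + evidence.strip()

example = make_input(
    "Global sea levels are rising.",
    "Tide-gauge records show a global mean sea-level rise over the 20th century.",
)
print(example)
```

A tokenizer for a BERT-style model would typically accept the two sentences directly as a pair; the explicit separator above just makes the pairing visible.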
[
-0.01043701171875,
-0.0264434814453125,
0.0128631591796875,
-0.0009813308715820312,
-0.014801025390625,
-0.006473541259765625,
-0.00823974609375,
-0.0294189453125,
0.00865936279296875,
0.0601806640625,
-0.037872314453125,
-0.0400390625,
-0.0574951171875,
0.0... |
mteb/cqadupstack-retrieval | 2022-04-12T17:28:40.000Z | [
"region:us"
] | mteb | null | null | 0 | 3 | 2022-04-12T17:20:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
rzhang123/US_Court_8_2 | 2022-04-18T00:38:58.000Z | [
"region:us"
] | rzhang123 | null | null | 0 | 3 | 2022-04-18T00:35:58 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
bookbot/id_word2phoneme | 2023-03-20T10:00:22.000Z | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:id",
"language:ms",
"region:us"
] | bookbot | null | null | 1 | 3 | 2022-04-20T07:37:29 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
- ms
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: ID Word2Phoneme
---
# Dataset Card for ID Word2Phoneme
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt)
- **Repository:** [Github](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt)
- **Point of Contact:**
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:**
### Dataset Summary
Originally a [Malay/Indonesian Lexicon](https://github.com/open-dict-data/ipa-dict/blob/master/data/ma.txt) retrieved from [ipa-dict](https://github.com/open-dict-data/ipa-dict). We removed the accented letters (because Indonesian graphemes do not use accents), separated homographs, and removed backslashes in phonemes -- resulting in a word-to-phoneme dataset.
### Languages
- Indonesian
- Malay
## Dataset Structure
### Data Instances
| word | phoneme |
| ----- | ------- |
| aba | aba |
| ab | ab |
| ab’ad | abʔad |
| abad | abad |
| abadi | abadi |
| ... | ... |
### Data Fields
- `word`: Word (grapheme) as a string.
- `phoneme`: Phoneme (IPA) as a string.
### Data Splits
| train |
| ----- |
| 27553 |
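A grapheme-to-phoneme lookup over this table is essentially a dictionary, and because homographs were separated into their own rows, one word may map to several phoneme strings. A sketch under that assumption (the sample rows come from the table above):

```python
from collections import defaultdict

# word -> list of IPA pronunciations; a word can appear more than once
# because homographs were split into separate rows.
lexicon = defaultdict(list)
rows = [("aba", "aba"), ("ab", "ab"), ("ab’ad", "abʔad"), ("abad", "abad")]
for word, phoneme in rows:
    lexicon[word].append(phoneme)

def g2p(word):
    """Return all known pronunciations for a word (empty list if OOV)."""
    return lexicon.get(word.lower(), [])

print(g2p("abad"))  # -> ['abad']
```

Out-of-vocabulary words return an empty list, which a downstream system could route to a trained text2text G2P model instead.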
## Additional Information
### Citation Information
```
@misc{open-dict-data-no-date,
author = {{Open-Dict-Data}},
title = {{GitHub - open-dict-data/ipa-dict: Monolingual wordlists with pronunciation information in IPA}},
url = {https://github.com/open-dict-data/ipa-dict},
}
```
| 2,027 | [
[
-0.0237884521484375,
-0.027984619140625,
-0.000629425048828125,
0.013519287109375,
-0.033660888671875,
-0.00179290771484375,
-0.020965576171875,
-0.0185089111328125,
0.042816162109375,
0.021026611328125,
-0.0280303955078125,
-0.07476806640625,
-0.0267333984375,
... |
mwong/climatetext-climate_evidence-claim-related-evaluation | 2022-10-25T10:08:48.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | 1 | 3 | 2022-04-21T09:55:30 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task - given a claim and climate-related evidence, predict whether the claim is related to the evidence.
[
-0.010955810546875,
-0.035675048828125,
0.0245819091796875,
0.00922393798828125,
-0.018402099609375,
-0.00860595703125,
-0.01360321044921875,
-0.024810791015625,
0.00276947021484375,
0.0650634765625,
-0.038909912109375,
-0.043182373046875,
-0.053955078125,
0... |
mwong/climatetext-claim-climate_evidence-related-evaluation | 2022-10-25T10:08:50.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | 1 | 3 | 2022-04-21T10:07:08 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task - given a claim and climate-related evidence, predict whether the evidence is related to the claim.
[
-0.01068878173828125,
-0.0352783203125,
0.024627685546875,
0.00876617431640625,
-0.018157958984375,
-0.00897979736328125,
-0.0132293701171875,
-0.024688720703125,
0.002750396728515625,
0.06597900390625,
-0.038604736328125,
-0.043182373046875,
-0.054290771484375,... |
loretoparisi/tatoeba-sentences | 2022-04-27T17:26:31.000Z | [
"license:cc-by-2-0",
"region:us"
] | loretoparisi | null | null | 1 | 3 | 2022-04-22T08:48:18 | ---
license: cc-by-2-0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
---
# Dataset Card for Tatoeba
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Tatoeba is a collection of sentences and translations.
To load a language pair which isn't part of the config, all you need to do is specify the language codes as a pair.
You can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/Tatoeba.php
E.g.
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")`
The default date is v2021-07-22, but you can also change the date with
`dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[@loretoparisi](https://github.com/loretoparisi)
| 3,248 | [
[
-0.0250244140625,
-0.039154052734375,
0.0178070068359375,
0.03668212890625,
-0.02838134765625,
0.00815582275390625,
-0.038482666015625,
-0.036712646484375,
0.048919677734375,
0.03814697265625,
-0.043182373046875,
-0.07427978515625,
-0.047332763671875,
0.0300... |
bigscience-data/roots_zh_uncorpus | 2022-12-12T10:59:49.000Z | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | 2 | 3 | 2022-04-22T10:33:31 | ---
language: zh
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
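The filters listed above can be approximated in a few lines. This sketch only mirrors the filter names (exact-match dedup, empty-document removal, minimum byte size); the actual BigScience implementations differ in detail:

```python
def apply_filters(docs, min_bytes=1024):
    """dedup_document + filter_remove_empty_docs + filter_small_docs_bytes_*."""
    seen = set()
    kept = []
    for doc in docs:
        if not doc.strip():                       # filter_remove_empty_docs
            continue
        if len(doc.encode("utf-8")) < min_bytes:  # filter_small_docs_bytes_1024
            continue
        if doc in seen:                           # dedup_document (exact match)
            continue
        seen.add(doc)
        kept.append(doc)
    return kept

docs = ["", "short", "x" * 2000, "x" * 2000]
print(len(apply_filters(docs)))  # 1
```

For the `ar` subset the threshold would be 300 bytes rather than 1024, per the per-language filter lists above.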
| 1,146 | [
[
-0.039581298828125,
-0.0237274169921875,
0.030517578125,
0.0202789306640625,
-0.033172607421875,
-0.00722503662109375,
-0.0128936767578125,
0.01448822021484375,
0.0482177734375,
0.04632568359375,
-0.06024169921875,
-0.07171630859375,
-0.0394287109375,
0.0123... |
AndresPitta/sg-reports_labeled | 2022-10-25T10:08:57.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en-US",
"license:unknown",
"region:us"
] | AndresPitta | null | null | 0 | 3 | 2022-04-22T14:52:01 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en-US
license:
- unknown
multilinguality:
- monolingual
pretty_name: Gender language in the reports of the secretary general 2020-2021
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: Andrés Pitta: andres.pitta@un.org**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,882 | [
[
-0.0330810546875,
-0.031585693359375,
0.01079559326171875,
0.016815185546875,
-0.0175933837890625,
0.0184173583984375,
-0.0239105224609375,
-0.0249481201171875,
0.0433349609375,
0.0435791015625,
-0.05816650390625,
-0.08197021484375,
-0.049285888671875,
0.007... |
Fhrozen/tau_srir_db | 2022-12-03T03:27:05.000Z | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:n<1K",
"source_datasets:unknown",
"license:unknown",
"audio-slot-filling",
"region:us"
] | Fhrozen | null | null | 0 | 3 | 2022-04-25T02:54:54 | ---
annotations_creators:
- unknown
language_creators:
- unknown
license: unknown
size_categories:
- n<1K
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids: []
tags:
- audio-slot-filling
---
# TAU Spatial Room Impulse Response Database (TAU-SRIR DB)
## Important
**This is a copy of the original Zenodo release.**
## Description
[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
AUTHORS
**Tampere University**
- Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
- Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
**Data Collection 2019-2020**
- Archontis Politis
- Aapo Hakala
- Ali Gohar
**Data Collection 2017-2018**
- Sharath Adavanne
- Aapo Hakala
- Eemi Fagerlund
- Aino Koskimies
The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are:
- Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
- Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios.
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
- Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms.
The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422).
[](https://erc.europa.eu/)
> **NOTE**: This database is a work-in-progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository.
## Report and reference
A compact description of the dataset, recording setup, recording procedure, and extraction can be found in:
>Politis., Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan.
available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report specifically focusing on the dataset collection and properties will follow.
## Aim
The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:
- monophonic and multichannel single- or multi-source speech in multi-room reverberant conditions,
- monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
- single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
- single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
- sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.
## Specifications
The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to playback a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and far-field recording independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness.
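The per-frequency least-squares idea can be illustrated in a simplified form. The sketch below is an assumption-laden illustration, not the routine used to produce the dataset: it deconvolves a whole signal in one frequency-domain pass (rather than per STFT frame) and adds a small regularizer for numerical safety.

```python
import numpy as np

def estimate_rir_fd(x, y, n_fft, reg=1e-8):
    """Frequency-domain least-squares estimate of an impulse response from a
    known excitation x and a recording y: H = conj(X) * Y / (|X|^2 + reg),
    computed independently at each frequency bin."""
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(y, n_fft)
    H = np.conj(X) * Y / (np.abs(X) ** 2 + reg)
    return np.fft.irfft(H, n_fft)

# Toy check: recover a short known filter from a noiseless "recording".
rng = np.random.default_rng(0)
x = rng.standard_normal(512)            # stand-in for the MLS excitation
h_true = np.array([1.0, 0.5, -0.25, 0.1])
y = np.convolve(x, h_true)              # simulated far-field recording
h_est = estimate_rir_fd(x, y, n_fft=len(y))
```

With `n_fft` at least `len(x) + len(h) - 1`, linear convolution equals zero-padded circular convolution, so the estimate matches the true filter up to the regularizer.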
The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:
1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.
The measurement trajectories were organized in groups, with each group being specified by a circular or linear trace at the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a _close_ and a _far_ one, except room TC352, where the same range was measured twice, but with different furniture configuration and open or closed doors. For linear trajectories two ranges were likewise measured, _close_ and _far_, but with linear paths at either side of the array, resulting in 4 unique trajectory groups; the exception is room SA203, where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups in the same room are always parallel to each other.
Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately 1 degree as seen from the microphone. This scheme, rather than extracting SRIRs at equally spaced points along the path (e.g. every 20 cm), was found more practical for synthesis purposes, making it easier to emulate moving sources at an approximately constant angular speed.
The following table summarizes the above properties for the currently available rooms:
| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |
More details on the trajectory geometries can be found in the database info file (`measinfo.mat`).
## Recording formats
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
**For the first-order ambisonics (FOA):**
\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi) * \cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi) * \cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9 kHz with this specific microphone array; at higher frequencies the actual encoded responses gradually deviate from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing).
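As a quick sketch, the ideal frequency-independent FOA responses above can be written as a steering-vector helper (channel ordering follows H1–H4 as listed, i.e. the omni, y, z, and x components). This is illustrative only, not part of the dataset tooling:

```python
import numpy as np

def foa_steering(azi_deg, ele_deg):
    """Ideal FOA steering vector [H1, H2, H3, H4] for a plane wave from
    (azimuth, elevation) in degrees, per the formulas above."""
    phi, theta = np.radians(azi_deg), np.radians(ele_deg)
    return np.array([
        1.0,                           # H1: omnidirectional (W)
        np.sin(phi) * np.cos(theta),   # H2: y component (Y)
        np.sin(theta),                 # H3: z component (Z)
        np.cos(phi) * np.cos(theta),   # H4: x component (X)
    ])
```

For example, a source straight ahead (0°, 0°) excites only the W and X channels.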
**For the tetrahedral microphone array (MIC):**
The four microphone have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}
where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine angle between the microphone and the DOA, and $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of a spherical Hankel function of the second kind. The expansion is limited to 30 terms which provides negligible modeling error up to 20kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
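The expansion above can be evaluated numerically. The sketch below is an assumption-laden re-implementation (it relies on `scipy.special` for the spherical Bessel functions and Legendre polynomials, and truncates the series at 30 terms as stated); it is not the routine behind the linked Array-Response-Simulator repository:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

R = 0.042   # array radius in metres
C = 343.0   # speed of sound in m/s

def sph_to_unit(azi_deg, ele_deg):
    """Unit vector for (azimuth, elevation) given in degrees."""
    phi, theta = np.radians(azi_deg), np.radians(ele_deg)
    return np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)])

def rigid_sphere_response(mic_azi, mic_ele, src_azi, src_ele, f, n_max=30):
    """Directional response of one microphone on a rigid spherical baffle,
    following the truncated expansion above."""
    cos_gamma = float(np.dot(sph_to_unit(mic_azi, mic_ele),
                             sph_to_unit(src_azi, src_ele)))
    kR = 2.0 * np.pi * f * R / C
    H = 0.0 + 0.0j
    for n in range(n_max + 1):
        # derivative of the spherical Hankel function of the second kind:
        # h_n'^(2)(x) = j_n'(x) - i * y_n'(x)
        dh2 = (spherical_jn(n, kR, derivative=True)
               - 1j * spherical_yn(n, kR, derivative=True))
        H += (1j ** (n - 1)) / dh2 * (2 * n + 1) * eval_legendre(n, cos_gamma)
    return H / kR ** 2
```

At low frequencies the surface pressure approaches the incident pressure, while at high frequencies the side of the sphere facing the source is louder than the shadow side.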
## Reference directions-of-arrival
For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for a sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct-sound part and applying a broadband version of the MUSIC localization algorithm to the windowed multichannel signal.
The DOAs are provided as Cartesian components [x, y, z] of unit length vectors.
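A small helper (illustrative only, assuming the same coordinate convention as the formulas above: azimuth from +x towards +y, elevation up from the xy-plane) converts these unit vectors back to azimuth/elevation in degrees:

```python
import math

def doa_to_angles(x, y, z):
    """Convert a unit-length DOA vector [x, y, z] to (azimuth, elevation)
    in degrees; z is clamped to [-1, 1] to guard against rounding."""
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, z))))
    return azimuth, elevation
```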
## Scene generator
A set of routines is shared, here termed scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab.
The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use.
The generator at the moment is set to work with the [NIGENS](https://zenodo.org/record/2535878) sound event sample database, and the [FSD50K](https://zenodo.org/record/4060432) sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges:
- [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088))
- [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset
- [DCASE2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset
- [DCASE2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873)
> **NOTE**: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [daniel.krause@tuni.fi](mailto:daniel.krause@tuni.fi), or [archontis.politis@tuni.fi](mailto:archontis.politis@tuni.fi).
## Dataset structure
The dataset contains a folder of the SRIRs (`TAU-SRIR_DB`), with all the SRIRs per room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields with an array of size `[7200x4x114]` on each. The array contains the SRIRs for the specific format, and it is arranged as `[samples x channels x DOAs]`, meaning that 300msec long (7200samples@24kHz) 4 channel RIRs are extracted at 114 positions along that specific trajectory.
The file `rirdata.mat` contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is an array of 9 structures itself, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz` which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`.
The file `measinfo.mat` contains measurement and recording information for each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions for the linear trajectories, or distances from center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with the origin at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone; keep in mind, though, that these would be the ideal circular or linear intended trajectories, while the actual DOAs obtained from acoustic analysis show some deviations around those ideal paths.
Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room containing two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 to 30 minutes. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to extend the ambience beyond the original recording time. Such a use case is demonstrated in the scene generator examples.
## Download
The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files.
The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings.
Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in Linux or OSX terminal:
Combine the split archive to a single archive:
>zip -s 0 split.zip --out single.zip
Extract the single archive using unzip:
>unzip single.zip
# License
The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.
| 19,341 | [
[
-0.05987548828125,
-0.049468994140625,
0.01337432861328125,
0.00907135009765625,
0.000759124755859375,
-0.02313232421875,
-0.020111083984375,
-0.02337646484375,
0.022857666015625,
0.005611419677734375,
-0.053619384765625,
-0.042083740234375,
0.000299215316772460... |
TheBritishLibrary/web_archive_classification | 2023-05-04T12:59:29.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
... | TheBritishLibrary | The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. | TODO | 2 | 3 | 2022-04-25T10:14:45 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: UK Selective Web Archive Classification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
tags:
- lam
---
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA obtained access to the subset of the Internet Archive's web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include: for each site, making the titles of every page on that site available, and for each site, extracting a set of keywords that summarise the site, via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/
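Since the exact column layout of the TSV is not documented here, the sketch below assumes a hypothetical header (`id`, `title`, `url`, `primary_category`, `secondary_category`) purely to illustrate how the two-tier subject hierarchy could be reconstructed from such a file:

```python
import csv
import io

# NOTE: hypothetical column names and rows -- not taken from the real TSV.
SAMPLE_TSV = (
    "id\ttitle\turl\tprimary_category\tsecondary_category\n"
    "1\tExample site\thttp://example.co.uk\tArts & Humanities\tHistory\n"
    "2\tAnother site\thttp://another.org.uk\tArts & Humanities\tArchitecture\n"
)

def read_classification(fileobj):
    """Parse the classification TSV and rebuild the two-tier subject
    hierarchy as {primary category: set of secondary categories}."""
    rows = list(csv.DictReader(fileobj, delimiter="\t"))
    hierarchy = {}
    for row in rows:
        hierarchy.setdefault(row["primary_category"], set()).add(
            row["secondary_category"])
    return rows, hierarchy

rows, hierarchy = read_classification(io.StringIO(SAMPLE_TSV))
```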
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Public Domain Mark 1.0.
### Citation Information
[Needs More Information] | 4,183 | [
[
-0.039337158203125,
0.006893157958984375,
-0.007598876953125,
-0.0020084381103515625,
-0.025115966796875,
0.008819580078125,
-0.012054443359375,
-0.034393310546875,
0.0204620361328125,
0.040618896484375,
-0.049163818359375,
-0.0601806640625,
-0.048370361328125,
... |
loubnabnl/tokenized-github-code-python | 2022-04-28T00:13:55.000Z | [
"region:us"
] | loubnabnl | null | null | 0 | 3 | 2022-04-25T12:34:38 | # Pretokenized GitHub Code Dataset
## Dataset Description
This is a pretokenized version of the Python files of the [GitHub Code dataset](https://huggingface.co/datasets/lvwerra/github-code), which consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using a BPE tokenizer trained on code, available in this [repo](https://huggingface.co/lvwerra/codeparrot). Having a pretokenized dataset can speed up the training loop by avoiding tokenization at each batch call. We also include `ratio_char_token`, which gives the ratio between the number of characters in a file and the number of tokens obtained after tokenization; this ratio can be a good filter to detect outlier files.
### How to use it
To avoid downloading the whole dataset, you can make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following lines of code:
```python
from datasets import load_dataset
ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{'input_ids': [504, 1639, 492,...,199, 504, 1639],
'ratio_char_token': 3.560888252148997
}
``` | 1,177 | [
[
-0.032562255859375,
-0.0125885009765625,
0.009307861328125,
0.0162811279296875,
-0.0411376953125,
0.007007598876953125,
-0.0225677490234375,
0.0012903213500976562,
0.037841796875,
0.040283203125,
-0.0266876220703125,
-0.043487548828125,
-0.04876708984375,
0.... |
pietrolesci/copa_nli | 2022-04-25T13:47:10.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 3 | 2022-04-25T13:46:42 | ## Overview
Original dataset available [here](https://people.ict.usc.edu/~gordon/copa.html).
Current dataset extracted from [this repo](https://github.com/felipessalvatore/NLI_datasets).
This is the "full" dataset.
# Curation
Same curation as the one applied in [this repo](https://github.com/felipessalvatore/NLI_datasets), that is
from the original COPA format:
|premise | choice1 | choice2 | label |
|---|---|---|---|
|My body cast a shadow over the grass | The sun was rising | The grass was cut | 0 |
to the NLI format:
| premise | hypothesis | label |
|---|---|---|
| My body cast a shadow over the grass | The sun was rising| entailment |
| My body cast a shadow over the grass | The grass was cut | not_entailment |
Also, the labels are encoded with the following mapping `{"not_entailment": 0, "entailment": 1}`
## Code to generate dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path
# read data
path = Path("./nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
datasets[dataset_path.name] = {}
for name in dataset_path.iterdir():
df = pd.read_csv(name)
datasets[dataset_path.name][name.name.split(".")[0]] = df
# merge all splits
df = pd.concat(list(datasets["copa"].values()))
# encode labels
df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("copa_nli", token="<token>")
``` | 1,871 | [
[
-0.0164947509765625,
-0.03338623046875,
0.0168914794921875,
0.0275115966796875,
-0.0162200927734375,
0.0014162063598632812,
-0.0088653564453125,
-0.0248260498046875,
0.051513671875,
0.05438232421875,
-0.03631591796875,
-0.06268310546875,
-0.0350341796875,
0.... |
SocialGrep/the-reddit-nft-dataset | 2022-07-01T17:52:49.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | A comprehensive dataset of Reddit's NFT discussion. | null | 1 | 3 | 2022-04-26T19:52:29 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-nft-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-nft-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
### Dataset Summary
A comprehensive dataset of Reddit's NFT discussion.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, these exist in two different files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
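A minimal sketch of working with these fields follows. The records below are hand-made stand-ins (not real rows from the dataset files), and the sentiment scale is assumed here to be a signed float:

```python
# Hand-made stand-in records mirroring the field list above.
records = [
    {"type": "post", "id": "abc123", "subreddit.name": "nft",
     "subreddit.nsfw": False, "created_utc": 1640995200,
     "score": 42, "title": "Minted my first NFT"},
    {"type": "comment", "id": "def456", "subreddit.name": "nft",
     "subreddit.nsfw": False, "created_utc": 1640995260,
     "score": 5, "body": "Congrats, looks great!", "sentiment": 0.8},
    {"type": "comment", "id": "ghi789", "subreddit.name": "nft",
     "subreddit.nsfw": False, "created_utc": 1640995320,
     "score": -2, "body": "Not a fan.", "sentiment": -0.6},
]

def positive_comments(rows, min_sentiment=0.0):
    """Keep comment rows whose sentiment exceeds the threshold
    (posts carry no 'sentiment' field, so they are skipped)."""
    return [r for r in rows
            if r["type"] == "comment" and r.get("sentiment", 0.0) > min_sentiment]

hits = positive_comments(records, min_sentiment=0.5)
```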
## Additional Information
### Licensing Information
CC-BY v4.0
| 2,902 | [
[
-0.054046630859375,
-0.06829833984375,
0.0211334228515625,
0.037628173828125,
-0.0386962890625,
0.0168914794921875,
-0.00820159912109375,
-0.0264434814453125,
0.0609130859375,
0.03131103515625,
-0.080322265625,
-0.06451416015625,
-0.046356201171875,
0.028381... |
Calin/eurosat-demo | 2022-04-27T09:26:44.000Z | [
"region:us"
] | Calin | null | null | 0 | 3 | 2022-04-27T09:26:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
strombergnlp/danfever | 2022-10-25T21:42:40.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"... | strombergnlp | \ | @inproceedings{norregaard-derczynski-2021-danfever,
title = "{D}an{FEVER}: claim verification dataset for {D}anish",
author = "N{\o}rregaard, Jeppe and
Derczynski, Leon",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may # " 31--2 " # jun,
year = "2021",
address = "Reykjavik, Iceland (Online)",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.47",
pages = "422--428",
abstract = "We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.",
} | 2 | 3 | 2022-04-28T09:17:29 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- natural-language-inference
paperswithcode_id: danfever
pretty_name: DanFEVER
tags:
- knowledge-verification
---
# Dataset Card for DanFEVER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/danfever](https://github.com/StrombergNLP/danfever)
- **Repository:** [https://stromberg.ai/publication/danfever/](https://stromberg.ai/publication/danfever/)
- **Paper:** [https://aclanthology.org/2021.nodalida-main.47/](https://aclanthology.org/2021.nodalida-main.47/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)
- **Size of downloaded dataset files:** 2.82 MiB
- **Size of the generated dataset:** 2.80 MiB
- **Total amount of disk used:** 5.62 MiB
### Dataset Summary
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.
### Supported Tasks and Leaderboards
This dataset supports the FEVER task, but in Danish.
* PwC leaderboard: [Fact Verification on DanFEVER](https://paperswithcode.com/sota/fact-verification-on-danfever)
### Languages
This dataset is in Danish; the BCP-47 code is `da-DK`.
## Dataset Structure
### Data Instances
```
{
'id': '0',
'claim': 'Den 31. oktober 1920 opdagede Walter Baade kometen (944) Hidalgo i det ydre solsystem.',
'label': 0,
'evidence_extract': '(944) Hidalgo (oprindeligt midlertidigt navn: 1920 HZ) er en mørk småplanet med en diameter på ca. 50 km, der befinder sig i det ydre solsystem. Objektet blev opdaget den 31. oktober 1920 af Walter Baade. En asteroide (småplanet, planetoide) er et fast himmellegeme, hvis bane går rundt om Solen (eller en anden stjerne). Pr. 5. maj 2017 kendes mere end 729.626 asteroider og de fleste befinder sig i asteroidebæltet mellem Mars og Jupiter.',
'verifiable': 1,
'evidence': 'wiki_26366, wiki_12289',
'original_id': '1'
}
```
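Note that the `evidence` field is a single comma-separated string of wiki page identifiers rather than a list. A minimal sketch of splitting it into identifiers (the instance dict below is abridged from the example above; label names are not documented in this card, so only the structure is decoded):

```python
# Decode the comma-separated `evidence` field of a DanFEVER instance.
# The dict mirrors (an abridged version of) the example instance above.
instance = {
    "id": "0",
    "label": 0,
    "verifiable": 1,
    "evidence": "wiki_26366, wiki_12289",
}

# Split the evidence string into individual wiki page identifiers.
evidence_ids = [e.strip() for e in instance["evidence"].split(",")]
print(evidence_ids)  # ['wiki_26366', 'wiki_12289']
```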
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
A dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
The source language is produced by Wikipedia contributors and editors, and by dictionary contributors and editors.
### Annotations
#### Annotation process
Detailed in [this paper](http://www.derczynski.com/papers/danfever.pdf).
#### Who are the annotators?
The annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.
### Discussion of Biases
The data is drawn from relatively formal topics, and so models trained on it may perform poorly outside these areas.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.
### Citation Information
Refer to this work as:
> Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).
Bibliographic reference:
```
@inproceedings{norregaard-derczynski-2021-danfever,
title = "{D}an{FEVER}: claim verification dataset for {D}anish",
author = "N{\o}rregaard, Jeppe and Derczynski, Leon",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
year = "2021",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.47",
pages = "422--428"
}
```
| 5,630 | [
[
-0.0416259765625,
-0.037261962890625,
0.02020263671875,
0.00704193115234375,
-0.0244293212890625,
-0.0108489990234375,
-0.022064208984375,
-0.032806396484375,
0.047607421875,
0.021453857421875,
-0.03668212890625,
-0.0714111328125,
-0.05181884765625,
0.042877... |
strombergnlp/polstance | 2022-10-25T21:42:18.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"stance-detection",
"region:us"
] | strombergnlp | Political stance in Danish. Examples represent statements by
politicians and are annotated for, against, or neutral to a given topic/article. | @inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
} | 1 | 3 | 2022-04-28T10:08:13 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-analysis
paperswithcode_id: polstance
pretty_name: Political Stance for Danish
tags:
- stance-detection
---
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/politicalstanceindanish/](https://stromberg.ai/publication/politicalstanceindanish/)
- **Repository:** [https://github.com/StrombergNLP/Political-Stance-in-Danish/](https://github.com/StrombergNLP/Political-Stance-in-Danish/)
- **Paper:** [https://aclanthology.org/W19-6121/](https://aclanthology.org/W19-6121/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 548 KB
- **Size of the generated dataset:** 222 KB
- **Total amount of disk used:** 770 KB
### Dataset Summary
Political stance in Danish. Examples represent statements by
politicians and are annotated for, against, or neutral to a given topic/article.
### Supported Tasks and Leaderboards
*
### Languages
Danish, bcp47: `da-DK`
## Dataset Structure
### Data Instances
#### polstance
An example of 'train' looks as follows.
```
{
'id': '0',
'topic': 'integration',
'quote': 'Der kunne jeg godt tænke mig, at der stod mere eksplicit, at de (landene, red.) skal bekæmpe menneskesmuglere og tage imod deres egne borgere',
'label': 2,
'quoteID': '516',
'party': 'Det Konservative Folkeparti',
'politician': 'Naser Khader',
}
```
### Data Fields
- `id`: a `string` feature.
- `topic`: a `string` expressing a topic.
- `quote`: a `string` to be classified for its stance to the topic.
- `label`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "against",
1: "neutral",
2: "for",
```
- `quoteID`: a `string` of the internal quote ID.
- `party`: a `string` describing the party affiliation of the quote utterer at the time of utterance.
- `politician`: a `string` naming the politician who uttered the quote.
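As a small illustration (the index-to-name map is transcribed from the tagset above; this is not an official API), the example instance decodes as:

```python
# Map polstance class indices to stance names, per the tagset above.
STANCE = {0: "against", 1: "neutral", 2: "for"}

# Abridged version of the example instance shown earlier.
example = {
    "topic": "integration",
    "label": 2,
    "politician": "Naser Khader",
}

stance = STANCE[example["label"]]
print(f"{example['politician']} is '{stance}' the topic '{example['topic']}'")
# → Naser Khader is 'for' the topic 'integration'
```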
### Data Splits
| name |train|
|---------|----:|
|polstance|900 sentences|
## Dataset Creation
### Curation Rationale
Collection of quotes from politicians, to enable detection of how political quotes orient to issues.
### Source Data
#### Initial Data Collection and Normalization
The data is taken from proceedings of the Danish parliament, the Folketing - [ft.dk](https://ft.dk).
#### Who are the source language producers?
Danish politicians.
### Annotations
#### Annotation process
Annotators labelled comments as against, neutral towards, or for a specified topic.
#### Who are the annotators?
Danish native speakers, 20s, male, studying Software Design.
### Personal and Sensitive Information
The data was public at the time of collection and will remain an open public record by law in Denmark.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lehmann2019political,
title={Political Stance in Danish},
author={Lehmann, Rasmus and Derczynski, Leon},
booktitle={Proceedings of the 22nd Nordic Conference on Computational Linguistics},
pages={197--207},
year={2019}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| 4,867 | [
[
-0.04736328125,
-0.040130615234375,
0.020233154296875,
0.00821685791015625,
-0.0396728515625,
0.0080108642578125,
-0.04376220703125,
0.0015630722045898438,
0.04180908203125,
0.03460693359375,
-0.031982421875,
-0.0782470703125,
-0.0555419921875,
0.00661468505... |
nielsr/funsd-image-feature | 2022-04-29T09:44:07.000Z | [
"region:us"
] | nielsr | null | null | 0 | 3 | 2022-04-29T09:43:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ntt123/viet-tts-dataset | 2022-05-06T09:03:02.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | ntt123 | null | null | 4 | 3 | 2022-05-06T03:40:14 | ---
license: cc-by-nc-4.0
---
# Vietnamese Text-To-Speech dataset (VietTTS-v1.1)
🔔🔔🔔 visit https://github.com/NTT123/vietTTS for a vietnamese TTS library (included pretrained models). 🔔🔔🔔
The text is from a collection of novels and short stories by the author Vũ Trọng Phụng. The text is in the public domain.
The audio is generated by Google Text-to-Speech offline engine on Android. The audio is NOT for commercial use.
Dataset size: `5.4G`.
Total audio duration: `35.9 hours`.
### Text-audio samples
- Sample 1:
+ Audio: [file1](https://huggingface.co/datasets/ntt123/viet-tts-dataset/blob/main/000000.wav)
+ Text: `"Ai" đây tức là một kẻ ăn mày vậy. Anh ta chưa kịp quay đi thì đã thấy mấy con chó vàng chạy xồng xộc ra cứ nhảy xổ vào chân anh.`
- Sample 2:
+ Audio: [file2](https://huggingface.co/datasets/ntt123/viet-tts-dataset/blob/main/022878.wav)
+ Text: `Ừ, thế mày đã nuôi được bố mẹ mày bữa nào chưa, hay xưa nay vẫn báo hại cơm cha áo mẹ mãi? Mấy hôm thấy ông đơ mặt không thèm nói, mày lại làm già à?`
### Download
Get the dataset from here: [link](https://huggingface.co/datasets/ntt123/viet-tts-dataset/blob/main/viet-tts.tar.gz).
Or, run the following commands:
```
wget https://huggingface.co/datasets/ntt123/viet-tts-dataset/resolve/main/viet-tts.tar.gz -O viet-tts.tar.gz
mkdir -p dataset
tar -C dataset -xzf viet-tts.tar.gz
```
`dataset` directory structure:
```
dataset
├── collections.txt
├── meta_data.tsv
└── wav
├── 000000.wav
├── 000001.wav
├── 000002.wav
├── 000003.wav
...
```
### Statistics
- Number of clips: 22884 clips.
- Shortest audio clip: 0.46 seconds.
- Median clip duration: 5.46 seconds.
- Mean clip duration: 5.65 seconds.
- Longest audio clip: 15.4 seconds.
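As a quick sanity check, multiplying the number of clips by the mean clip duration reproduces the stated total duration of roughly 35.9 hours:

```python
# Cross-check the statistics above: 22884 clips at a mean of 5.65 s
# should add up to approximately the stated 35.9 hours of audio.
num_clips = 22884
mean_duration_s = 5.65

total_hours = num_clips * mean_duration_s / 3600
print(round(total_hours, 1))  # 35.9
```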
### Vũ Trọng Phụng's collections
- Bệnh Lao Chữa Bằng Mồm Hay Là ... Thầy Lang Bất Hủ, 1934?
- Cạm Bẫy Người, 1933.
- Cơm Thầy Cơm Cô, 1936.
- Đời Là Một Cuộc Chiến Đấu,1939.
- Dứt Tình, 1934.
- Giông Tố, 1936.
- Gương Tống Tiền, N/A.
- Hồ Sê Líu, Hồ Líu Sê Sàng, 1936.
- Kỹ Nghệ Lấy Tây, 1934.
- Làm Đĩ, 1936.
- Lấy Nhau Vì Tình, 1937.
- Lấy Vợ Xấu, 1937.
- Lòng Tự Ái, 1937.
- Máu Mê, 1937.
- Một Cái Chết, 1931.
- Một Con Chó Hay Chim Chuột, 1937.
- Một Đồng Bạc, 1939.
- Người Có Quyền, 1937.
- Sao Mày Không Vỡ Nắp Ơi!, 1934.
- Số Đỏ, 1936.
- Sư Cụ Triết Lý, 1935.
- Trúng Số Độc Đắc, 1938.
- Tự Do, 1937.
- Từ Lý Thuyết Đến Thực Hành, N/A.
- Vỡ Đê, 1936.
| 2,428 | [
[
-0.0189971923828125,
-0.04193115234375,
0.026885986328125,
0.032257080078125,
-0.045318603515625,
0.005115509033203125,
-0.01708984375,
-0.036651611328125,
0.052734375,
0.04052734375,
-0.0526123046875,
-0.05340576171875,
-0.038177490234375,
0.012542724609375... |
Fhrozen/AudioSet2K22 | 2023-05-07T23:50:56.000Z | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:100K<n<100M",
"source_datasets:unknown",
"license:cc-by-sa-4.0",
"audio-slot-filling",
"region:us"
] | Fhrozen | null | null | 4 | 3 | 2022-05-09T12:42:09 | ---
annotations_creators:
- unknown
language_creators:
- unknown
license: cc-by-sa-4.0
size_categories:
- 100K<n<100M
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids: []
tags:
- audio-slot-filling
---
# Dataset Card for audioset2022
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [AudioSet Ontology](https://research.google.com/audioset/ontology/index.html)
- **Repository:** [Needs More Information]
- **Paper:** [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google.com/pubs/pub45857.html)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/audioset)
### Dataset Summary
The AudioSet ontology is a collection of sound events organized in a hierarchy. The ontology covers a wide range of everyday sounds, from human and animal sounds, to natural and environmental sounds, to musical and miscellaneous sounds.
**This repository only includes audio files for DCASE 2022 - Task 3**
The included labels are limited to:
- Female speech, woman speaking
- Male speech, man speaking
- Clapping
- Telephone
- Telephone bell ringing
- Ringtone
- Laughter
- Domestic sounds, home sounds
- Vacuum cleaner
- Kettle whistle
- Mechanical fan
- Walk, footsteps
- Door
- Cupboard open or close
- Music
- Background music
- Pop music
- Musical instrument
- Acoustic guitar
- Marimba, xylophone
- Cowbell
- Piano
- Electric piano
- Rattle (instrument)
- Water tap, faucet
- Bell
- Bicycle bell
- Chime
- Knock
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for Sound Event Detection/Localization.
**The recordings include only single-channel audio. For localization tasks, RIR information will need to be applied.**
### Languages
None
## Dataset Structure
### Data Instances
**WIP**
```
{
    'file': 'path/to/audio.mp3'  # illustrative path
}
```
### Data Fields
- file: A path to the downloaded audio file in .mp3 format.
### Data Splits
This dataset only includes audio files from the unbalanced train list.
The data comprises two splits: weak labels and strong labels.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially downloaded by Nelson Yalta (nelson.yalta@ieee.org).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@inproceedings{45857,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
| 4,348 | [
[
-0.043212890625,
-0.021575927734375,
0.0135040283203125,
0.007068634033203125,
-0.00240325927734375,
-0.01117706298828125,
-0.0321044921875,
-0.03948974609375,
0.0291595458984375,
0.040679931640625,
-0.0806884765625,
-0.07708740234375,
-0.0347900390625,
-0.0... |
strombergnlp/bajer_danish_misogyny | 2023-05-16T04:08:50.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:da",
"license:other",
"not-for-all-audiences",
"region:u... | strombergnlp | null | null | 0 | 3 | 2022-05-11T10:06:59 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language: da
license: other
multilinguality:
- monolingual
pretty_name: 'BAJER: Annotations for Misogyny'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
tags:
- not-for-all-audiences
extra_gated_prompt: "To receive a copy of the BAJER Dataset, the Researcher(s) must observe the restrictions listed below. In addition to other possible remedies, failure to observe these restrictions may result in revocation of permission to use the data as well as denial of access to additional material. By accessing this dataset you agrees to the following restrictions on the BAJER Dataset: **Purpose.** The Dataset will be used for research and/or statistical purposes only. **Redistribution** The Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not. The Researcher(s) is solely liable for all claims, losses, damages, costs, fees, and expenses resulting from their disclosure of the data. **Modification and Commercial Use** The Dataset, in whole or in part, will not be modified or used for commercial purposes. The right granted herein is specifically for the internal research purposes of Researcher(s), and Researcher(s) shall not duplicate or use the disclosed Database or its contents either directly or indirectly for commercialization or any other direct for-profit purpose. **Storage** The Researcher(s) must ensure that the data is stored and processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures in accordance with the GDPR. **Disclaimers** The Database has been developed as part of research conducted at ITU Copenhagen. The Database is experimental in nature and is made available “as is” without obligation by ITU Copenhagen to provide accompanying services or support. The entire risk as to the quality and
performance of the Database is with Researcher(s). **Governing law and indemnification** This agreement is governed by Danish law. To the extent allowed by law, the Researcher(s) shall indemnify and hold harmless ITU against any and all claims, losses, damages, costs, fees, and expenses resulting from Researcher(s) possession and/or use of the Dataset."
extra_gated_fields:
Your name and title: text
Organisation name: text
Organisation / Researcher Address: text
Contact e-mail address: text
extra_gated_heading: "Acknowledge ITU clearance agreement for the BAJER Dataset to access the repository"
extra_gated_button_content: "Accept license"
---
# Dataset Card for "Bajer"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://stromberg.ai/publication/aom/](https://stromberg.ai/publication/aom/)
- **Repository:** [https://github.com/StrombergNLP/Online-Misogyny-in-Danish-Bajer](https://github.com/StrombergNLP/Online-Misogyny-in-Danish-Bajer)
- **Paper:** [https://aclanthology.org/2021.acl-long.247/](https://aclanthology.org/2021.acl-long.247/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 7.29 MiB
- **Size of the generated dataset:** 6.57 MiB
- **Total amount of disk used:** 13.85 MiB
### Dataset Summary
This is a high-quality dataset of posts sampled from social
media and annotated for misogyny. The language is Danish.
Online misogyny, a category of online abusive language, has serious and
harmful social consequences. Automatic detection of misogynistic language
online, while imperative, poses complicated challenges to both data
gathering, data annotation, and bias mitigation, as this type of data is
linguistically complex and diverse.
See the accompanying ACL paper [Annotating Online Misogyny](https://aclanthology.org/2021.acl-long.247/) for full details.
### Supported Tasks and Leaderboards
*
### Languages
Danish (`bcp47:da`)
## Dataset Structure
### Data Instances
#### Bajer
- **Size of downloaded dataset files:** 7.29 MiB
- **Size of the generated dataset:** 6.57 MiB
- **Total amount of disk used:** 13.85 MiB
An example of 'train' looks as follows.
```
{
'id': '0',
'dataset_id': '0',
'label_id': '0',
'text': 'Tilfældigt hva, din XXXXXXXXXX 🤬🤬🤬',
'sampling': 'keyword_twitter',
'subtask_A': 1,
'subtask_B': 0,
'subtask_C1': 3,
'subtask_C2': 6
}
```
### Data Fields
- `id`: a `string` feature, unique identifier in this dataset.
- `dataset_id`: a `string` feature, internal annotation identifier.
- `label_id`: a `string` feature, internal annotation sequence number.
- `text`: a `string` of the text that's annotated.
- `sampling`: a `string` describing which sampling technique surfaced this message
- `subtask_A`: is the text abusive `ABUS` or not `NOT`? `0: NOT, 1: ABUS`
- `subtask_B`: for abusive text, what's the target - individual `IND`, group `GRP`, other `OTH`, or untargeted `UNT`? `0: IND, 1: GRP, 2: OTH, 3: UNT, 4: not applicable`
- `subtask_C1`: for group-targeted abuse, what's the group - misogynistic `SEX`, other `OTH`, or racist `RAC`? `0: SEX, 1: OTH, 2: RAC, 3: not applicable`
- `subtask_C2`: for misogyny, is it neosexist `NEOSEX`, discrediting `DISCREDIT`, normative stereotyping `NOR`, benevolent sexism `AMBIVALENT`, dominance `DOMINANCE`, or harassment `HARASSMENT`? `0: NEOSEX, 1: DISCREDIT, 2: NOR, 3: AMBIVALENT, 4: DOMINANCE, 5: HARASSMENT, 6: not applicable`
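As a sketch (the index-to-name maps are transcribed from the field descriptions above), the example instance decodes hierarchically: once the target is an individual, the group-level subtasks become not applicable.

```python
# Decode the hierarchical subtask labels of a BAJER instance, using
# the index-to-name maps listed in the field descriptions above.
SUBTASK_A = {0: "NOT", 1: "ABUS"}
SUBTASK_B = {0: "IND", 1: "GRP", 2: "OTH", 3: "UNT", 4: "not applicable"}
SUBTASK_C1 = {0: "SEX", 1: "OTH", 2: "RAC", 3: "not applicable"}
SUBTASK_C2 = {0: "NEOSEX", 1: "DISCREDIT", 2: "NOR", 3: "AMBIVALENT",
              4: "DOMINANCE", 5: "HARASSMENT", 6: "not applicable"}

# Subtask values taken from the example instance above.
example = {"subtask_A": 1, "subtask_B": 0, "subtask_C1": 3, "subtask_C2": 6}

decoded = {
    "abusive": SUBTASK_A[example["subtask_A"]],
    "target": SUBTASK_B[example["subtask_B"]],
    "group": SUBTASK_C1[example["subtask_C1"]],
    "misogyny_type": SUBTASK_C2[example["subtask_C2"]],
}
print(decoded)
# {'abusive': 'ABUS', 'target': 'IND', 'group': 'not applicable',
#  'misogyny_type': 'not applicable'}
```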
### Data Splits
| name |train|
|---------|----:|
|bajer|27880 sentences|
## Dataset Creation
### Curation Rationale
The goal was to collect data for developing an annotation schema of online misogyny.
Random sampling of text often results in scarcity of examples of specifically misogynistic content (e.g. (Wulczyn et al., 2017;
Founta et al., 2018)). Therefore, we used the common alternative of collecting data by using predefined keywords with a potentially high search hit
(e.g. Waseem and Hovy (2016)), and identifying
relevant user-profiles (e.g. (Anzovino et al., 2018))
and related topics (e.g. (Kumar et al., 2018)).
We searched for keywords (specific slurs, hashtags), that are known to occur in sexist posts. These
were defined by previous work, a slur list from
Reddit, and from interviews and surveys of online
misogyny among women. We also searched for
broader terms like “sex” or “women”, which do
not appear exclusively in a misogynistic context,
for example in the topic search, where we gathered
relevant posts and their comments from the social
media pages of public media. A complete list of
keywords can be found in the appendix.
Social media provides a potentially biased, but
broad snapshot of online human discourse, with
plenty of language and behaviours represented. Following best practice guidelines (Vidgen and Derczynski, 2020), we sampled from a language for
which there are no existing annotations of the target
phenomenon: Danish.
Different social media platforms attract different user groups and can exhibit domain-specific
language (Karan and Šnajder, 2018). Rather than
choosing one platform (existing misogyny datasets
are primarily based on Twitter and Reddit (Guest
et al., 2021)), we sampled from multiple platforms:
Statista (2020) shows that the platform where most
Danish users are present is Facebook, followed
by Twitter, YouTube, Instagram and lastly, Reddit.
The dataset was sampled from Twitter, Facebook
and Reddit posts as plain text.
### Source Data
#### Initial Data Collection and Normalization
The dataset was sampled from Twitter, Facebook
and Reddit posts as plain text. Data was gathered based on: keyword-based search (i.e. purposive sampling); topic-based search; and content from specific users.
#### Who are the source language producers?
Danish-speaking social media users
### Annotations
#### Annotation process
In annotating our dataset, we built on the MATTER
framework (Pustejovsky and Stubbs, 2012) and use
the variation presented by Finlayson and Erjavec
(2017) (the MALER framework), where the Train & Test stages are replaced by Leveraging of annotations for one’s particular goal, in our case the
creation of a comprehensive taxonomy.
We created a set of guidelines for the annotators.
The annotators were first asked to read the guidelines and individually annotate about 150 different
posts, after which there was a shared discussion.
After this pilot round, the volume of samples per annotator was increased and every sample labeled by
2-3 annotators. When instances were ‘flagged’ or
annotators disagreed on them, they were discussed
during weekly meetings, and misunderstandings
were resolved together with the external facilitator. After round three, when reaching 7k annotated
posts (Figure 2), we continued with independent
annotations maintaining a 15% instance overlap
between randomly picked annotator pairs.
Management of annotator disagreement is an important part of the process design. Disagreements
can be solved by majority voting (Davidson et al.,
2017; Wiegand et al., 2019), labeled as abuse if at
least one annotator has labeled it (Golbeck et al.,
2017) or by a third objective instance (Gao and
Huang, 2017). Most datasets use crowdsourcing
platforms or a few academic experts for annotation
(Vidgen and Derczynski, 2020). Inter-annotatoragreement (IAA) and classification performance
are established as two grounded evaluation measurements for annotation quality (Vidgen and Derczynski, 2020). Comparing the performance of amateur annotators (while providing guidelines) with
expert annotators for sexism and racism annotation,
Waseem (2016) show that the quality of amateur
annotators is competitive with expert annotations
when several amateurs agree. Facing the trade-off
between training annotators intensely and the number of involved annotators, we continued with the
trained annotators and group discussions/ individual revisions for flagged content and disagreements
(Section 5.4).
#### Who are the annotators?
| Characteristic | Annotators |
|---|---|
| Gender | 6 female, 2 male (8 total) |
| Age | 5 <30; 3 ≥30 |
| Ethnicity | 5 Danish; 1 Persian, 1 Arabic, 1 Polish |
| Study/occupation | Linguistics (2); Health/Software Design; Ethnography/Digital Design; Communication/Psychology; Anthropology/Broadcast Moderator; Ethnography/Climate Change; Film Artist |
### Personal and Sensitive Information
Usernames and PII were stripped during the annotation process, by skipping content containing these and eliding it from the final dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The data contains abusive language. It may be possible to identify original speakers based on the content, so the data is only available for research purposes under a restrictive license and conditions. We hope that identifying sexism can help moderators. There is a possibility that the content here could be used to generate misogyny in Danish, which would place women in Denmark in an even more hostile environment, and for this reason data access is restricted and tracked.
### Discussion of Biases
We have taken pains to mitigate as many biases as we were aware of in this work.
**Selection biases:** Selection biases for abusive
language can be seen in the sampling of text, for instance when using keyword search (Wiegand et al.,
2019), topic dependency (Ousidhoum et al., 2020), users (Wiegand et al., 2019), domain (Wiegand
et al., 2019), time (Florio et al., 2020) and lack of
linguistic variety (Vidgen and Derczynski, 2020).
**Label biases:** Label biases can be caused by, for
instance, non-representative annotator selection,
lack in training/domain expertise, preconceived
notions, or pre-held stereotypes. These biases are
treated in relation to abusive language datasets
by several sources, e.g. general sampling and
annotators biases (Waseem, 2016; Al Kuwatly
et al., 2020), biases towards minority identity
mentions based for example on gender or race
(Davidson et al., 2017; Dixon et al., 2018; Park
et al., 2018; Davidson et al., 2019), and political
annotator biases (Wich et al., 2020). Other qualitative biases comprise, for instance, demographic
bias, over-generalization, topic exposure as social
biases (Hovy and Spruit, 2016).
We applied several measures to mitigate biases occurring through the annotation design and execution: First, we selected labels grounded in existing, peer-reviewed research from more than one field. Second, we aimed for diversity in annotator profiles in terms of age, gender, dialect, and background. Third, we recruited a facilitator with a background in ethnographic studies and provided intensive annotator training. Fourth, we engaged in weekly group discussions, iteratively improving the codebook and integrating edge cases. Fifth, the selection of platforms from which we sampled data is based on local user representation in Denmark, rather than convenience. Sixth, diverse sampling methods for data collection reduced selection biases.
### Other Known Limitations
The data is absolutely NOT a reasonable or in any way stratified sample of social media text, so class prevalence/balance here says nothing about the incidence of these phenomena in the wild. That said, we hypothesise that the distribution of types of misogyny in this data (subtask C2) is roughly representative of how misogyny presents on the studied platforms.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors and the ethnographer-led annotation team.
### Licensing Information
The data is licensed under a restrictive usage agreement. [Apply for access here](https://forms.gle/MPdV8FG8EUuS1MdS6)
### Citation Information
```
@inproceedings{zeinert-etal-2021-annotating,
title = "Annotating Online Misogyny",
author = "Zeinert, Philine and
Inie, Nanna and
Derczynski, Leon",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.247",
doi = "10.18653/v1/2021.acl-long.247",
pages = "3181--3197",
}
```
### Contributions
Dataset added by author [@leondz](https://github.com/leondz)
Chr0my/freesound.org | 2023-04-09T14:31:11.000Z | ["size_categories:100K<n<1M", "language:en", "music", "region:us"] | Chr0my | null | null | 9 | 3 | 2022-05-15T17:31:35 |
---
language:
- en
tags:
- music
size_categories:
- 100K<n<1M
---
This dataset has been scraped from https://freesound.org and contains 554,849 audio clips.
License: cc-by-sa-3.0, https://creativecommons.org/licenses/by-sa/3.0/
bigscience-data/roots_ar_labr | 2022-12-12T10:59:59.000Z | ["language:ar", "license:gpl-2.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:06:23 |
---
language: ar
license: gpl-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_labr
# labr
- Dataset uid: `labr`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0076 % of total
- 0.0701 % of ar
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
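The filter names above (shared across the ROOTS subset cards) describe a sequential document-cleaning pipeline: each step either transforms a document or drops it. The sketch below models that idea in Python; it is NOT the actual BigScience code, and the filter implementations are assumptions inferred only from the names.

```python
def filter_remove_empty_docs(doc):
    # Drop documents that are empty or whitespace-only.
    return doc if doc and doc.strip() else None

def make_filter_small_docs_bytes(min_bytes):
    # Drop documents smaller than min_bytes when UTF-8 encoded
    # (e.g. filter_small_docs_bytes_300).
    def f(doc):
        return doc if len(doc.encode("utf-8")) >= min_bytes else None
    return f

def make_dedup_document(seen):
    # Keep only the first occurrence of each exact document (dedup_document).
    def f(doc):
        if doc in seen:
            return None
        seen.add(doc)
        return doc
    return f

def run_pipeline(docs, steps):
    # Apply each step in order; a step returning None drops the document.
    kept = []
    for doc in docs:
        for step in steps:
            doc = step(doc)
            if doc is None:
                break
        if doc is not None:
            kept.append(doc)
    return kept

# Mirrors "dedup_document -> filter_remove_empty_docs -> filter_small_docs_bytes_300".
steps = [make_dedup_document(set()), filter_remove_empty_docs,
         make_filter_small_docs_bytes(300)]
docs = ["", "too short", "x" * 400, "x" * 400]
print(run_pipeline(docs, steps) == ["x" * 400])  # True
```

The ordering matters: deduplication before the size filter means a duplicate never reaches later, more expensive steps.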
bigscience-data/roots_ar_wikinews | 2022-12-12T11:00:04.000Z | ["language:ar", "license:cc-by-sa-3.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:06:27 |
---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
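The figures in these Sizes sections read as byte shares of the named corpus. A minimal sketch of the underlying arithmetic, where the byte counts are hypothetical placeholders rather than real ROOTS sizes:

```python
def share_percent(subset_bytes, corpus_bytes):
    # Percentage of a corpus that a subset occupies, as reported on the cards.
    return 100.0 * subset_bytes / corpus_bytes

# Hypothetical example: a 307-byte subset of a 1 MB corpus.
print(round(share_percent(307, 1_000_000), 4))  # 0.0307
```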
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
bigscience-data/roots_ar_wikiquote | 2022-12-12T11:00:10.000Z | ["language:ar", "license:cc-by-sa-3.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:06:27 |
---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikiquote
# wikiquote_filtered
- Dataset uid: `wikiquote_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0462 % of total
- 0.1697 % of en
- 0.0326 % of fr
- 0.0216 % of ar
- 0.0066 % of zh
- 0.0833 % of pt
- 0.0357 % of es
- 0.0783 % of indic-ta
- 0.0361 % of indic-hi
- 0.0518 % of ca
- 0.0405 % of vi
- 0.0834 % of indic-ml
- 0.0542 % of indic-te
- 0.1172 % of indic-gu
- 0.0634 % of indic-kn
- 0.0539 % of id
- 0.0454 % of indic-ur
- 0.0337 % of indic-mr
- 0.0347 % of eu
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-gu
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-kn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
bigscience-data/roots_ar_wikiversity | 2022-12-12T11:00:16.000Z | ["language:ar", "license:cc-by-sa-3.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:06:27 |
---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikiversity
# wikiversity_filtered
- Dataset uid: `wikiversity_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0367 % of total
- 0.1050 % of en
- 0.1178 % of fr
- 0.1231 % of pt
- 0.0072 % of zh
- 0.0393 % of es
- 0.0076 % of ar
- 0.0069 % of indic-hi
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
bigscience-data/roots_ar_wikisource | 2022-12-12T11:00:32.000Z | ["language:ar", "license:cc-by-sa-3.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:06:32 |
---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
bigscience-data/roots_ar_wikibooks | 2022-12-12T11:02:12.000Z | ["language:ar", "license:cc-by-sa-3.0", "region:us"] | bigscience-data | null | null | 0 | 3 | 2022-05-18T09:07:28 |
---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikibooks
# wikibooks_filtered
- Dataset uid: `wikibooks_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0897 % of total
- 0.2591 % of en
- 0.0965 % of fr
- 0.1691 % of es
- 0.2834 % of indic-hi
- 0.2172 % of pt
- 0.0149 % of zh
- 0.0279 % of ar
- 0.1374 % of vi
- 0.5025 % of id
- 0.3694 % of indic-ur
- 0.5744 % of eu
- 0.0769 % of ca
- 0.0519 % of indic-ta
- 0.1470 % of indic-mr
- 0.0751 % of indic-te
- 0.0156 % of indic-bn
- 0.0476 % of indic-ml
- 0.0087 % of indic-pa
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-bn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-pa
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300