id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
yashtiwari/fleurs-hi-en-ST | yashtiwari | 2023-10-29T07:06:51Z | 13 | 0 | null | [
"region:us"
] | 2023-10-29T07:06:51Z | 2023-10-29T06:54:39.000Z | 2023-10-29T06:54:39 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: hindi
dtype: string
- name: english
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 1286250983
num_examples: 876
download_size: 824653765
dataset_size: 1286250983
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fleurs-hi-en-ST"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This is a dataset for speech-to-text translation from Hindi to English. The datasets used to build it were FLEURS and FLORES. | [
-0.327154278755188,
-0.7233777642250061,
-0.05547066405415535,
0.28462138772010803,
-0.20582886040210724,
0.15654410421848297,
-0.22283461689949036,
-0.4058632552623749,
0.8802979588508606,
0.5815989971160889,
-0.629612147808075,
-0.5654435753822327,
-0.907647430896759,
0.13791772723197937... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MichaelVeser/finetuningopensecurity | MichaelVeser | 2023-10-29T19:31:54Z | 13 | 0 | null | [
"region:us"
] | 2023-10-29T19:31:54Z | 2023-10-29T19:31:41.000Z | 2023-10-29T19:31:41 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marij868/memes_dataset_full | marij868 | 2023-10-30T06:13:37Z | 13 | 0 | null | [
"region:us"
] | 2023-10-30T06:13:37Z | 2023-10-30T06:12:48.000Z | 2023-10-30T06:12:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5694447.0
num_examples: 98
download_size: 0
dataset_size: 5694447.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "memes_dataset_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.59004807472229,
-0.27251458168029785,
0.20847079157829285,
0.4459543228149414,
-0.4952157735824585,
0.07769237458705902,
0.092458575963974,
-0.019792016595602036,
1.1658440828323364,
0.5661605596542358,
-0.9179162979125977,
-0.7344390749931335,
-0.7716184258460999,
-0.05196504294872284,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jcho02/golf | jcho02 | 2023-10-30T19:52:21Z | 13 | 0 | null | [
"region:us"
] | 2023-10-30T19:52:21Z | 2023-10-30T18:01:46.000Z | 2023-10-30T18:01:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jay401521/domain_test_balance | jay401521 | 2023-10-31T05:52:07Z | 13 | 0 | null | [
"region:us"
] | 2023-10-31T05:52:07Z | 2023-10-31T05:52:02.000Z | 2023-10-31T05:52:02 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: domain
dtype:
class_label:
names:
'0': AIRL
'1': CAR
'2': COMM
'3': TECH
- name: label
dtype: int64
- name: rank
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1789645
num_examples: 20648
download_size: 971736
dataset_size: 1789645
---
# Dataset Card for "domain_test_balance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6750942468643188,
-0.4161611795425415,
-0.17168204486370087,
0.24201126396656036,
-0.12465731054544449,
0.03669920191168785,
0.34965041279792786,
-0.09075452387332916,
0.708066999912262,
0.4457988142967224,
-0.9009655117988586,
-0.5623800754547119,
-0.5437023639678955,
-0.28408443927764... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MaxReynolds/MyPatternDataset | MaxReynolds | 2023-11-15T23:29:59Z | 13 | 0 | null | [
"region:us"
] | 2023-11-15T23:29:59Z | 2023-10-31T21:05:24.000Z | 2023-10-31T21:05:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1030683.0
num_examples: 38
download_size: 1018065
dataset_size: 1030683.0
---
# Dataset Card for "MyPatternDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.740740954875946,
-0.2732699513435364,
-0.04115557298064232,
0.3438599407672882,
-0.03754416108131409,
0.0748463124036789,
0.4745157063007355,
0.08161400258541107,
1.1356562376022339,
0.49061867594718933,
-1.0904208421707153,
-0.441958487033844,
-0.7268438935279846,
-0.16045641899108887,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Skywork/mock_gsm8k_test | Skywork | 2023-11-02T03:48:52Z | 13 | 3 | null | [
"license:other",
"arxiv:2310.19341",
"region:us"
] | 2023-11-02T03:48:52Z | 2023-11-01T04:15:50.000Z | 2023-11-01T04:15:50 | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
# Introduction
This dataset is a mirror of the GSM8K Test split. We have manually ensured the answers are correct. This dataset can serve as a reference to evaluate a model's ability to generalize to math problems.
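As a hedged illustration (this is not Skywork's official evaluation harness), GSM8K-style answers are conventionally scored by extracting the final number after the `####` marker and comparing exact matches:

```python
import re

# Hedged sketch: GSM8K reference answers end with "#### <number>";
# exact-match scoring compares those final numbers after stripping commas.
def extract_answer(text):
    """Return the number after the last '####' marker, or None."""
    matches = re.findall(r"####\s*([-+]?[\d,]*\d)", text)
    return matches[-1].replace(",", "") if matches else None

def exact_match(prediction, reference):
    """Score a single model prediction against the reference answer."""
    pred, gold = extract_answer(prediction), extract_answer(reference)
    return pred is not None and pred == gold
```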
# License Agreement
Community usage of the SkyPile dataset requires the Skywork Community License. The SkyPile dataset supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by the terms and conditions of the Skywork Community License as well as Apache 2.0.
# Contact Us and Citation
If you find our work helpful, please feel free to cite our paper:
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.2687322497367859,
-0.40495848655700684,
0.232635498046875,
0.46983030438423157,
-0.11717135459184647,
-0.30816909670829773,
-0.23042292892932892,
-0.09586643427610397,
-0.02100134640932083,
0.7168529629707336,
-0.7995803356170654,
-0.4234865605831146,
-0.6152095198631287,
0.261329174041... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ambushburn/human-burn | ambushburn | 2023-11-02T11:50:05Z | 13 | 0 | null | [
"region:us"
] | 2023-11-02T11:50:05Z | 2023-11-01T20:03:24.000Z | 2023-11-01T20:03:24 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ugursahin/gnm_dataset | ugursahin | 2023-11-03T16:45:49Z | 13 | 0 | null | [
"region:us"
] | 2023-11-03T16:45:49Z | 2023-11-02T00:22:43.000Z | 2023-11-02T00:22:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KETI-AIR/kor_ag_news | KETI-AIR | 2023-11-15T01:14:56Z | 13 | 0 | ag-news | [
"task_categories:text-classification",
"task_ids:topic-classification",
"size_categories:100K<n<1M",
"language:ko",
"license:unknown",
"region:us"
] | 2023-11-15T01:14:56Z | 2023-11-03T08:42:19.000Z | 2023-11-03T08:42:19 | ---
language:
- ko
size_categories:
- 100K<n<1M
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: ag-news
pretty_name: AG’s News Corpus
license: unknown
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
- name: data_index_by_user
dtype: int32
splits:
- name: train
num_bytes: 35075728
num_examples: 120000
- name: test
num_bytes: 2195191
num_examples: 7600
download_size: 22724153
dataset_size: 37270919
---
# Dataset Card for ag_news
## Licensing Information & Dataset Summary
AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
(xiang.zhang@nyu.edu) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015).
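A minimal sketch of decoding the integer `label` field (an assumption based on the `class_label` names listed in the YAML header above, in that order):

```python
# Assumed mapping, mirroring the class_label names in the YAML header above.
AG_NEWS_TOPICS = ["World", "Sports", "Business", "Sci/Tech"]

def id_to_topic(label_id):
    """Map the dataset's integer label to its topic name."""
    return AG_NEWS_TOPICS[label_id]

def topic_to_id(topic):
    """Inverse mapping, e.g. for building evaluation targets."""
    return AG_NEWS_TOPICS.index(topic)
```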
## Source Data Citation Information
```
@inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
}
``` | [
-0.3467939496040344,
-0.628930389881134,
0.15607406198978424,
-0.10867870599031448,
-0.3256129324436188,
0.08952363580465317,
-0.10765289515256882,
-0.42356613278388977,
0.139753058552742,
0.18351295590400696,
-0.47314053773880005,
-0.820521891117096,
-0.7013831734657288,
0.209001228213310... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aghent/Aerial-Semantic-Segmentation-Cactis | aghent | 2023-11-04T02:52:16Z | 13 | 0 | null | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
] | 2023-11-04T02:52:16Z | 2023-11-04T02:48:23.000Z | 2023-11-04T02:48:23 | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="aghent/Aerial-Semantic-Segmentation-Cactis" src="https://huggingface.co/datasets/aghent/Aerial-Semantic-Segmentation-Cactis/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['copiapoa', 'copiapoa-v2']
```
### Number of Images
```json
{'valid': 1060, 'test': 1013, 'train': 8028}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("aghent/Aerial-Semantic-Segmentation-Cactis", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/uai-63qde/instance-segmentation-kgvep/dataset/1](https://universe.roboflow.com/uai-63qde/instance-segmentation-kgvep/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ instance-segmentation-kgvep_dataset,
title = { Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { UAI },
howpublished = { \url{ https://universe.roboflow.com/uai-63qde/instance-segmentation-kgvep } },
url = { https://universe.roboflow.com/uai-63qde/instance-segmentation-kgvep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { nov },
note = { visited on 2023-11-04 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on November 4, 2023 at 2:50 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state-of-the-art computer vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 10101 images.
Cactis are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
-0.3007683753967285,
-0.5285221338272095,
0.3816051781177521,
-0.0011533773504197598,
-0.28935694694519043,
-0.17796239256858826,
0.047747816890478134,
-0.44684091210365295,
0.3442033529281616,
0.1580026000738144,
-0.7633887529373169,
-0.6067357063293457,
-0.3180299401283264,
0.00655718566... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gugaio/dokki-pagamentos | gugaio | 2023-11-22T00:59:08Z | 13 | 0 | null | [
"region:us"
] | 2023-11-22T00:59:08Z | 2023-11-04T15:49:07.000Z | 2023-11-04T15:49:07 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KenBars/item_rec | KenBars | 2023-11-04T21:38:09Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-04T21:38:09Z | 2023-11-04T20:40:01.000Z | 2023-11-04T20:40:01 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxolotl/must-c-en-de-wait7-01 | maxolotl | 2023-11-05T10:15:02Z | 13 | 0 | null | [
"region:us"
] | 2023-11-05T10:15:02Z | 2023-11-05T10:14:40.000Z | 2023-11-05T10:14:40 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 883194948
num_examples: 4513829
- name: test
num_bytes: 10870283
num_examples: 57041
- name: validation
num_bytes: 5438711
num_examples: 26843
download_size: 156235216
dataset_size: 899503942
---
# Dataset Card for "must-c-en-de-wait7-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7363930940628052,
-0.12224447727203369,
0.4129554033279419,
0.6263239979743958,
-0.29765498638153076,
-0.12507246434688568,
0.335721880197525,
-0.3666287958621979,
0.8254390954971313,
0.5950464010238647,
-0.9810617566108704,
-0.7142860889434814,
-0.7110733985900879,
0.25787630677223206,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuhsinchan/nmsqa_full | yuhsinchan | 2023-11-05T13:49:07Z | 13 | 0 | null | [
"region:us"
] | 2023-11-05T13:49:07Z | 2023-11-05T13:47:39.000Z | 2023-11-05T13:47:39 | ---
dataset_info:
features:
- name: context_code
sequence: int16
- name: context_cnt
sequence: int16
- name: question_code
sequence: int16
- name: question_cnt
sequence: int16
- name: start_idx
dtype: int64
- name: end_idx
dtype: int64
splits:
- name: train
num_bytes: 506838688
num_examples: 87075
- name: dev
num_bytes: 62621788
num_examples: 10493
download_size: 152779086
dataset_size: 569460476
---
# Dataset Card for "nmsqa_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5798580050468445,
0.001052502659149468,
0.3119511008262634,
0.1771608144044876,
-0.34645864367485046,
0.1090598851442337,
0.330203652381897,
0.09917645901441574,
1.0637006759643555,
0.6585066914558411,
-0.8321242332458496,
-0.7749767899513245,
-0.5429749488830566,
-0.1349746137857437,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autumnjohnson/minds14 | autumnjohnson | 2023-11-06T07:41:29Z | 13 | 0 | null | [
"task_categories:automatic-speech-recognition",
"task_ids:keyword-spotting",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
... | 2023-11-06T07:41:29Z | 2023-11-06T02:46:40.000Z | 2023-11-06T02:46:40 | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
language_bcp47:
- en
- en-GB
- en-US
- en-AU
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: 'MInDS-14'
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
- keyword-spotting
---
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MInDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multi-lingual fine-tuning, uncomment the following line
# minds_14 = load_dataset("PolyAI/minds14", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class label
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate, and path to the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
### Data Splits
Every config only has the `"train"` split, containing *ca.* 600 examples.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
| [
-0.46738192439079285,
-0.5980510711669922,
0.36973637342453003,
0.2669966518878937,
-0.12211816012859344,
-0.34158211946487427,
-0.5432057976722717,
-0.390738844871521,
0.32248517870903015,
0.4533486068248749,
-0.7834721803665161,
-0.9560457468032837,
-0.5506324172019958,
0.097286321222782... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
w95/triplets | w95 | 2023-11-08T22:55:35Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-08T22:55:35Z | 2023-11-06T08:48:31.000Z | 2023-11-06T08:48:31 | ---
license: mit
---
The largest query length is: <b>1815</b><br>
The average query length is: <b>61.19101432132145</b>
-------
The largest pos length is: <b>1312</b><br>
The average pos length is: <b>152.11767179632923</b>
-------
The largest neg length is: <b>1669</b><br>
The average neg length is: <b>224.2615171813766</b> | [
-0.6914466023445129,
-0.9441896677017212,
0.6302585005760193,
0.5590343475341797,
-0.6139355301856995,
-0.4619767367839813,
-0.05101466923952103,
-0.5775731801986694,
0.7482057809829712,
0.20438511669635773,
0.04179041460156441,
-0.9282395243644714,
-0.899965226650238,
0.32983464002609253,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
StrangeCroissant/fantasy_dataset | StrangeCroissant | 2023-11-06T14:05:20Z | 13 | 0 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"books",
"fantasy",
"scifi",
"text",
"region:us"
] | 2023-11-06T14:05:20Z | 2023-11-06T09:35:30.000Z | 2023-11-06T09:35:30 | ---
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- books
- fantasy
- scifi
- text
size_categories:
- 10K<n<100K
---
# Fantasy/Sci-fi Dataset
This dataset contains fantasy and sci-fi books in plain-text format. Each line of the dataset represents one sentence of the concatenated corpus of the following books:
1. 01 Horselords.txt
2. 01 The Second Generation.txt 02 Tantras.txt
3. R.A. Salvatore - The Icewind Dale Trilogy - 2 - Streams of Silver.txt
4. RA SalvatoreThe Legacy of The Drow - 2 - Starless Night.txt
5. R.A.Salvatore - Icewind Dale Trilogy 1 - The Crystal Shard.txt
6. Star Wars - [Thrawn Trilogy 02] - Dark Force Rising (by Timothy Zahn).txt
7. Robert Jordan - The Wheel of Time 01 - Eye of the world.txt
8. 03 Crusade.txt
9. Salvatore, RA - Cleric Quintet 5 -The Chaos Curse.txt
10. 03 Waterdeep.txt Clarke Arthur C - 3001 The Final Odissey.txt
11. Dragonlance Preludes 2 vol 2 - Flint the King.txt
12. 03 Dragons of Spring Dawning.txt
13. Lloyd Alexander - [Chronicles Of Prydain 4] Taran Wanderer.txt
14. 01 Dragons of Autumn Twilight.txt
15. 03 The Two Swords.txt
16. Robert Jordan - 12 - The Gathering Storm - Chapter One.txt
17. 02 War Of The Twins.txt
18. 01 - The Fellowship Of The Ring.txt
19. 02 The Lone Drow.txt
20. 01 The Thousand Orcs.txt Auel, Jean - Earth's Children
21. 03 - The Mammoth Hunters.txt 01 Shadowdale.txt Salvatore, RA - Cleric Quintet 3 - Night Masks.txt
22. Robert Jordan - The Strike at Shayol Ghul.txt
23. Salvatore, R.A. - Paths of Darkness 1 - The Silent Blade.txt
24. Clancy Tom - Patriot Games.txt
25. Lloyd Alexander - [Chronicles Of Prydain 1] Book of Three.txt
26. Lloyd Alexander - [Chronicles Of Prydain 2] Black Cauldron.txt
27. Salvatore, R.A. - Paths of Darkness 3 - Servant of the Shard.txt
28. 02 Crown of Fire.txt
29. 04 Prince of Lies.txt
30. Salvatore, R.A. - Paths of Darkness 2 - The Spine of the World.txt
31. Robert Jordan - The Wheel of Time 11 - Knife of Dreams.txt
32. Lloyd Alexander - [Chronicles Of Prydain 3] Castle Of Llyr.txt R.A. Salvatore - The Dark Elf Trilogy.txt
33. 02 Dragonwall.txt Frank Herbert - Dune.txt
34. 02 - The Two Towers.txt
35. Salvatore, RA - Cleric Quintet 4 - The Fallen Fortress.txt
36. Robert Jordan - The Wheel of Time 04 - The Shadow Rising.txt
37. Robert Jordan - The Wheel of Time 10 - Crossroads of Twilight.txt
38. Harry Potter 2 - Chamber of Secrets.txt
39. Auel, Jean - Earth's Children 01 - The Clan of the Cave Bear.txt
40. Harry Potter 6 - The Half Blood Prince.txt
41. Robert Jordan - The Wheel of Time 03 - The Dragon Reborn.txt
42. R.A. Salvatore - The Legacy of the Drow 1 - Legacy.txt
43. 01 Spellfire.txt Frank Herbert - Children of Dune.txt
44. 01 Time Of The Twins.txt
45. R.A. Salvatore - The Legacy of the Drow III - Siege of Darkness.txt
46. Robert Jordan - The Wheel of Time 08 - The Path of Daggers.txt
47. R.A. Salvatore - The Icewind Dale Trilogy - 3 - The Halfling's Gem.txt
48. Auel, Jean - Earth's Children 05 - The Shelters Of Stone.txt
49. Harry Potter 7 - Deathly Hollows.txt
50. Robert Jordan - The Wheel of Time 07 - A Crown of Swords.txt
51. Harry Potter 1 - Sorcerer's Stone.txt
52. 05 Crucible - The Trial Of Cyric The Mad.txt Star Wars - [Thrawn Trilogy 01] - Heir to the Empire (by Timothy Zahn).txt
53. Robert Jordan - The Wheel of Time 05 - The Fires of Heaven.txt Robert Jordan - The Wheel of Time Compendium.txt
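A hedged sketch of how such a one-sentence-per-line corpus can be produced. The card does not specify the sentence splitter used, so a naive punctuation-based regex stands in here:

```python
import re

def corpus_to_lines(book_texts):
    """Concatenate raw book texts and emit one sentence per line.

    Naive splitter on sentence-final punctuation; the actual tool used
    for this dataset is not documented in the card.
    """
    corpus = " ".join(t.strip() for t in book_texts)
    sentences = re.split(r"(?<=[.!?])\s+", corpus)
    return [s for s in sentences if s]
```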
| [
-0.5150745511054993,
-0.06206458806991577,
0.35628172755241394,
-0.03251861035823822,
-0.13935226202011108,
0.26400113105773926,
0.39417365193367004,
-0.42415010929107666,
0.5036250948905945,
0.7101942300796509,
-0.9617018699645996,
-0.7243812680244446,
-0.41255423426628113,
0.497518599033... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liyucheng/ceval_all_dev | liyucheng | 2023-11-06T23:15:33Z | 13 | 0 | null | [
"region:us"
] | 2023-11-06T23:15:33Z | 2023-11-06T23:15:23.000Z | 2023-11-06T23:15:23 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: train
num_bytes: 167293
num_examples: 260
- name: dev
num_bytes: 167293
num_examples: 260
download_size: 238168
dataset_size: 334586
---
# Dataset Card for "ceval_all_dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6178083419799805,
-0.36203667521476746,
0.3280603587627411,
0.27093634009361267,
-0.19680289924144745,
0.08406055718660355,
0.0890708938241005,
-0.07175976783037186,
0.9943289756774902,
0.5494040846824646,
-0.6745662093162537,
-1.0352567434310913,
-0.5274444222450256,
-0.162995919585227... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mariaaaaa/SST2_with_COT | Mariaaaaa | 2023-11-07T00:17:45Z | 13 | 0 | null | [
"region:us"
] | 2023-11-07T00:17:45Z | 2023-11-07T00:08:30.000Z | 2023-11-07T00:08:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kalyan003/Evidence_QA | kalyan003 | 2023-11-07T03:31:05Z | 13 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-07T03:31:05Z | 2023-11-07T03:08:14.000Z | 2023-11-07T03:08:14 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BrighterIndia/rahul-gandhi-captions | BrighterIndia | 2023-11-12T20:55:26Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-12T20:55:26Z | 2023-11-07T18:05:57.000Z | 2023-11-07T18:05:57 | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4490186.0
num_examples: 116
download_size: 4452572
dataset_size: 4490186.0
---
This dataset contains images of Rahul Gandhi with the token "rgai", to be used for text-to-image fine-tuning. | [
-0.1565788835287094,
-0.702656626701355,
-0.4077344536781311,
0.3045893907546997,
-0.6560506224632263,
0.1208198070526123,
-0.1835363656282425,
-0.04191933944821358,
0.0336071252822876,
0.5445041060447693,
-0.9999425411224365,
-0.44132593274116516,
-0.8356002569198608,
0.3184071183204651,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bys2058/sd_training | bys2058 | 2023-11-10T02:12:27Z | 13 | 0 | null | [
"region:us"
] | 2023-11-10T02:12:27Z | 2023-11-07T18:57:40.000Z | 2023-11-07T18:57:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: hair_mask
dtype: image
- name: result_image
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 54725442754.016
num_examples: 31704
download_size: 53307305775
dataset_size: 54725442754.016
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sd_training"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6577710509300232,
-0.020466463640332222,
0.2631658911705017,
0.25964972376823425,
-0.14954283833503723,
0.033261872828006744,
0.3281875550746918,
-0.05407768860459328,
0.8461681604385376,
0.3523561358451843,
-1.0182158946990967,
-0.6701138615608215,
-0.6389835476875305,
-0.3510235548019... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
determined-ai/consumer_complaints_short | determined-ai | 2023-11-08T18:52:16Z | 13 | 0 | null | [
"region:us"
] | 2023-11-08T18:52:16Z | 2023-11-08T18:52:13.000Z | 2023-11-08T18:52:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Issue
dtype: string
- name: Consumer Complaint
dtype: string
splits:
- name: train
num_bytes: 3965527
num_examples: 22610
- name: test
num_bytes: 1342624
num_examples: 7631
download_size: 2481164
dataset_size: 5308151
---
# Dataset Card for "consumer_complaints_short"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5277925133705139,
-0.23571786284446716,
0.271383672952652,
0.26022693514823914,
-0.3093251883983612,
-0.10648439824581146,
0.1139301210641861,
-0.3600495755672455,
0.9174709916114807,
0.5334449410438538,
-0.7181892395019531,
-0.8081619739532471,
-0.366571843624115,
-0.219908207654953,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
leoleoasd/learningq_ted_ed | leoleoasd | 2023-11-09T02:50:35Z | 13 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-09T02:50:35Z | 2023-11-09T02:49:06.000Z | 2023-11-09T02:49:06 | ---
license: unknown
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: questionsrc
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 29551239.588505447
num_examples: 6172
- name: validation
num_bytes: 3284535.0547172287
num_examples: 686
- name: test
num_bytes: 3653207.3567773257
num_examples: 763
download_size: 20939717
dataset_size: 36488982.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mnhnam/portal-support-pure | mnhnam | 2023-11-13T03:28:11Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-13T03:28:11Z | 2023-11-10T04:53:55.000Z | 2023-11-10T04:53:55 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nguyenthanhdo/patent_v3_merged | nguyenthanhdo | 2023-11-10T08:02:31Z | 13 | 0 | null | [
"region:us"
] | 2023-11-10T08:02:31Z | 2023-11-10T08:02:15.000Z | 2023-11-10T08:02:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: lang
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 120390258.7775974
num_examples: 100488
download_size: 79766258
dataset_size: 120390258.7775974
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "patent_v3_merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49068698287010193,
-0.06408481299877167,
0.27823561429977417,
0.2833035886287689,
-0.32341352105140686,
0.049035511910915375,
0.6015511155128479,
-0.49627920985221863,
0.703659176826477,
0.739016056060791,
-0.5196558833122253,
-0.6675002574920654,
-0.5620571374893188,
-0.464993625879287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jungypark/SQuAD_V2_Computational_complexity_theory | jungypark | 2023-11-10T11:52:48Z | 13 | 0 | null | [
"region:us"
] | 2023-11-10T11:52:48Z | 2023-11-10T11:34:38.000Z | 2023-11-10T11:34:38 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
presencesw/complexquestion_2WIKIMQA_1000 | presencesw | 2023-11-12T14:30:28Z | 13 | 0 | null | [
"region:us"
] | 2023-11-12T14:30:28Z | 2023-11-12T03:42:53.000Z | 2023-11-12T03:42:53 | ---
dataset_info:
features:
- name: entities
sequence: 'null'
- name: triplets
sequence: 'null'
- name: answer
dtype: string
- name: complex_question
dtype: string
- name: system_prompt
dtype: string
- name: user_prompt
dtype: string
- name: gpt_answer
dtype: string
splits:
- name: train
num_bytes: 5897070
num_examples: 1000
download_size: 629114
dataset_size: 5897070
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "complexquestion_2WIKIMQA_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6356295347213745,
-0.3328280746936798,
0.23908014595508575,
0.5510063767433167,
-0.027960844337940216,
0.11163505166769028,
0.26525208353996277,
-0.2966914474964142,
0.712016761302948,
0.42174166440963745,
-0.9649547934532166,
-0.5752168297767639,
-0.6057481169700623,
-0.268554806709289... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
automated-research-group/llama2_7b_chat-boolq | automated-research-group | 2023-11-12T08:17:12Z | 13 | 0 | null | [
"region:us"
] | 2023-11-12T08:17:12Z | 2023-11-12T08:17:11.000Z | 2023-11-12T08:17:11 | ---
dataset_info:
features:
- name: id
dtype: string
- name: request
dtype: string
- name: response
dtype: string
- name: input_perplexity
dtype: float64
- name: input_likelihood
dtype: float64
- name: output_perplexity
dtype: float64
- name: output_likelihood
dtype: float64
splits:
- name: validation
num_bytes: 2716450
num_examples: 3270
download_size: 1480464
dataset_size: 2716450
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "llama2_7b_chat-boolq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3245592713356018,
-0.39041054248809814,
0.09482049196958542,
0.566257655620575,
-0.4120083153247833,
0.29573240876197815,
0.3913210332393646,
-0.33234432339668274,
0.7702447772026062,
0.6413677930831909,
-0.7352908849716187,
-0.8318066000938416,
-0.6122149229049683,
-0.21959643065929413... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Trelis/hh-rlhf-dpo | Trelis | 2023-11-13T14:03:24Z | 13 | 1 | null | [
"license:other",
"human-feedback",
"DPO",
"rlhf",
"Llama 2",
"arxiv:2204.05862",
"region:us"
] | 2023-11-13T14:03:24Z | 2023-11-13T10:54:53.000Z | 2023-11-13T10:54:53 | ---
tags:
- human-feedback
- DPO
- rlhf
- Llama 2
extra_gated_prompt: >-
Access to this repo requires the purchase of a license (see link on model card
below)
extra_gated_fields:
Name: text
Affiliation: text
Email: text
I have purchased access (access will be granted once your payment clears): checkbox
I agree to the terms of the license described on the dataset card: checkbox
license: other
---
# DPO formatted Helpful and Harmless RLHF Dataset
This dataset is built from [Anthropic's hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).
It is modified as follows:
- The prompt formatting is switched to the Llama 2 format with [INST] and [/INST]
- The data is split into prompt, chosen and rejected rows, as required by [HuggingFace's DPO trainer](https://huggingface.co/docs/trl/dpo_trainer).
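The split described above can be sketched as follows. This is a minimal, hedged illustration of turning one hh-rlhf pair into DPO-style rows; it assumes the standard hh-rlhf `"\n\nHuman: … \n\nAssistant: …"` transcript layout, and the `to_dpo` helper name and the inline sample pair are hypothetical, not part of this dataset's actual conversion code (which additionally applies the Llama 2 [INST] wrapping):

```python
def to_dpo(chosen_text: str, rejected_text: str) -> dict:
    """Split an hh-rlhf chosen/rejected pair into prompt, chosen, rejected.

    Both transcripts share the same prompt; only the final assistant
    turn differs, so everything up to the last "Assistant:" marker is
    treated as the prompt.
    """
    marker = "\n\nAssistant:"
    cut = chosen_text.rfind(marker) + len(marker)
    return {
        "prompt": chosen_text[:cut],
        "chosen": chosen_text[cut:],
        "rejected": rejected_text[rejected_text.rfind(marker) + len(marker):],
    }

# Hypothetical example pair in the hh-rlhf transcript format.
pair = {
    "chosen": "\n\nHuman: What is 2+2?\n\nAssistant: 2+2 is 4.",
    "rejected": "\n\nHuman: What is 2+2?\n\nAssistant: I cannot help with that.",
}
row = to_dpo(pair["chosen"], pair["rejected"])
```

The resulting `prompt`/`chosen`/`rejected` keys match the column layout HuggingFace's DPO trainer expects.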
Purchase access to this dataset [here](https://buy.stripe.com/dR614QcWh95VdZ68xb). Purchase entitles the user to make use of the dataset for training large language models.
---
- The original dataset card follows below:
# Dataset Card for HH-RLHF
## Dataset Summary
This repository provides access to two different kinds of data:
1. Human preference data about helpfulness and harmlessness from [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). These data are meant to train preference (or reward) models for subsequent RLHF training. These data are *not* meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead to harmful models and this should be avoided.
2. Human-generated and annotated red teaming dialogues from [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). These data are meant to understand how crowdworkers red team models and what types of red team attacks are successful or not. The data are *not* meant for fine-tuning or preference modeling (use the data above for preference modeling). These data are entire transcripts of conversations that are derived from the harmlessness preference modeling data described above, where only the chosen response is incorporated into the overall transcript. Furthermore, the transcripts are annotated with human and automated measurements of how harmful the overall dialogues are.
**Disclaimer**: The data (especially the harmlessness preference data and the red team data) contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models *less* harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. As mentioned above, these data are *not* intended for training dialogue agents as this will likely lead to harmful model behavior.
Each of these datasets are described further below.
## Human preference data about helpfulness and harmlessness (PM Data)
The data are described in the paper: [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected".
For **helpfulness**, the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process.
For **harmlessness**, the data are only collected for our base models, but otherwise formatted in the same way.
Details about the data collection process and crowdworker population can be found in the paper, specifically in section 2 and appendix D.
## Red teaming data (not PM Data)
The data are described in the paper: [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). If you find the data useful, please cite the paper. At a high level these data can be examined to understand what successful red team attempts look like.
Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper.
Each line of the jsonl file contains a dictionary with the following fields:
- `transcript` a text transcript of a conversation between a human adversary (red team member) and an AI assistant
- `min_harmlessness_score_transcript` a real value score of the harmlessness of the AI assistant (lower is more harmful) as obtained from a preference model
- `num_params` number of parameters in the language model powering the AI assistant
- `model_type` type of model powering the AI assistant
- `rating` the red team member's rating of how successful they were at breaking the AI assistant (Likert scale, higher is more successful)
- `task_description` a short text description written by the red team member about how they tried to red team the AI assistant
- `task_description_harmlessness_score` a real value score of the harmlessness of the task description (lower is more harmful) as obtained from a preference model
- `red_team_member_id` an arbitrary identifier of the red team member. One red team member can generate multiple red team attacks
- `is_upworker` a binary indicator that is true if the red team member was from the crowd platform Upwork or false if they were from MTurk
- `tags` a list of up to 6 tags per transcript. Tags are short descriptions of the red team attempts generated by crowdworkers who reviewed red team data post-hoc. Tags were only provided for a random sample of 1000 red team attempts for two of four model types.
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the `data_dir` argument of the `load_dataset()` function as follows:
```python
from datasets import load_dataset
# Load all helpfulness/harmless subsets (share the same schema)
dataset = load_dataset("Anthropic/hh-rlhf")
# Load one of the harmless subsets
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
# Load the red teaming subset
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
```
## Contact
The original authors host this dataset on GitHub here: https://github.com/anthropics/hh-rlhf
You can submit inquiries to: redteam@anthropic.com | [
-0.23129944503307343,
-0.6560770273208618,
0.14307811856269836,
0.10083562880754471,
-0.08663669973611832,
-0.03357836231589317,
-0.09570428729057312,
-0.6243447065353394,
0.27566131949424744,
0.6029136180877686,
-0.6447476148605347,
-0.47017186880111694,
-0.4395705759525299,
0.40106824040... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MinhMai/mobile_giving_foundation | MinhMai | 2023-11-16T08:31:16Z | 13 | 0 | null | [
"language:en",
"doi:10.57967/hf/1352",
"region:us"
] | 2023-11-16T08:31:16Z | 2023-11-13T11:19:58.000Z | 2023-11-13T11:19:58 | ---
configs:
- config_name: train
data_files: "train.jsonl"
default: true
language:
- en
pretty_name: mobile_giving_foundation
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AIMClab/ChinaOpen | AIMClab | 2023-11-15T15:50:34Z | 13 | 0 | null | [
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-15T15:50:34Z | 2023-11-14T02:35:42.000Z | 2023-11-14T02:35:42 | ---
license: cc-by-nc-sa-4.0
language:
- zh
size_categories:
- 1K<n<10K
---
## Dataset Description
- **Homepage:** [ChinaOpen homepage](https://ruc-aimc-lab.github.io/ChinaOpen/)
- **Paper:** [ChinaOpen: A Dataset for Open-World Multimodal Learning](https://doi.org/10.1145/3581783.3612156)
- **Point of Contact:** [Aozhu Chen](caz@ruc.edu.cn)
### Dataset Summary
ChinaOpen-1k is a dataset sourced from Bilibili, a popular Chinese video-sharing website. It is a manually annotated test set of videos, including manually checked user titles/tags, manually written captions, and manual labels describing the visual objects/actions/scenes shown in the content.
### Languages
Chinese and English
## Dataset Structure
All the files are put in a zip package.
```bash
├── ChinaOpen-1k
├── video01.mp4
├── video02.mp4
├── video03.mp4
├── [...]
└── ChinaOpen-1k-annotations.json
```
### Data Instances
Please refer to https://ruc-aimc-lab.github.io/ChinaOpen/#examples | [
-0.18673817813396454,
-0.4773845970630646,
-0.22865194082260132,
0.39806434512138367,
-0.29659706354141235,
-0.4436042308807373,
-0.5763664245605469,
-0.08667857944965363,
-0.15492412447929382,
0.22730332612991333,
-0.745444655418396,
-0.658949613571167,
-0.28407397866249084,
-0.0873731598... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sreejith8100/test | sreejith8100 | 2023-11-14T07:27:23Z | 13 | 0 | null | [
"region:us"
] | 2023-11-14T07:27:23Z | 2023-11-14T03:05:23.000Z | 2023-11-14T03:05:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': marriage
'1': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 44443420.0
num_examples: 64
- name: test
num_bytes: 8086095.0
num_examples: 10
download_size: 52447696
dataset_size: 52529515.0
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6750997304916382,
-0.4192483425140381,
0.08099692314863205,
0.19183449447155,
-0.1334189772605896,
0.008788415230810642,
0.24120880663394928,
-0.13406147062778473,
0.7381711006164551,
0.3334384560585022,
-0.8203108310699463,
-0.6550991535186768,
-0.47400179505348206,
-0.1874891966581344... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Eitanli/rewrite_instructions_bu | Eitanli | 2023-11-28T09:08:35Z | 13 | 0 | null | [
"region:us"
] | 2023-11-28T09:08:35Z | 2023-11-14T12:30:51.000Z | 2023-11-14T12:30:51 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: recipe
dtype: string
- name: instructions
dtype: string
splits:
- name: train
num_bytes: 119957890
num_examples: 55701
download_size: 60783007
dataset_size: 119957890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rewrite_instructions_bu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.17915306985378265,
-0.6524439454078674,
0.28280332684516907,
0.41043150424957275,
-0.2791108191013336,
-0.063298299908638,
0.21808888018131256,
0.1691163182258606,
0.7689725160598755,
0.9014145135879517,
-1.0950169563293457,
-0.8211121559143066,
-0.23253808915615082,
-0.0929358601570129... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/rap-lyrics-v2 | nateraw | 2023-11-15T03:32:19Z | 13 | 0 | null | [
"region:us"
] | 2023-11-15T03:32:19Z | 2023-11-15T03:32:13.000Z | 2023-11-15T03:32:13 | ---
dataset_info:
features:
- name: text
dtype: string
- name: num_tokens
dtype: int64
- name: completion
dtype: string
splits:
- name: train
num_bytes: 4236119
num_examples: 7319
download_size: 2186104
dataset_size: 4236119
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rap-lyrics-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4166371524333954,
-0.025647250935435295,
0.07620253413915634,
0.5149229764938354,
-0.19714896380901337,
0.17750513553619385,
0.27552086114883423,
-0.3293948471546173,
0.8681622743606567,
0.46909475326538086,
-0.9023513793945312,
-0.7604362368583679,
-0.8317119479179382,
-0.3003398776054... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JoaoJunior/python_java_dataset_APR | JoaoJunior | 2023-11-15T07:30:33Z | 13 | 0 | null | [
"region:us"
] | 2023-11-15T07:30:33Z | 2023-11-15T07:25:44.000Z | 2023-11-15T07:25:44 | ---
dataset_info:
features:
- name: rem
dtype: string
- name: add
dtype: string
- name: context
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 14814811902
num_examples: 2728295
- name: test
num_bytes: 3704062611
num_examples: 681983
download_size: 5172322839
dataset_size: 18518874513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "python_java_dataset_APR"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3840039372444153,
-0.34940478205680847,
0.07292716205120087,
0.5099987387657166,
-0.1237611174583435,
-0.35410425066947937,
0.2726990878582001,
-0.05134572833776474,
0.5906181931495667,
0.36569517850875854,
-0.698298990726471,
-0.7194849848747253,
-0.6027112603187561,
-0.064222261309623... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joey234/tweet_eval_affix_neg | joey234 | 2023-11-17T01:30:00Z | 13 | 0 | null | [
"region:us"
] | 2023-11-17T01:30:00Z | 2023-11-17T01:28:17.000Z | 2023-11-17T01:28:17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
- name: words_with_affixes
sequence: string
splits:
- name: test
num_bytes: 56170
num_examples: 405
download_size: 0
dataset_size: 56170
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "tweet_eval_affix_neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33653947710990906,
-0.7057356834411621,
0.0445844940841198,
0.3511176109313965,
-0.22328442335128784,
0.2742362916469574,
0.11240024864673615,
0.11713853478431702,
0.8999149799346924,
0.298759400844574,
-0.8153791427612305,
-0.9246956706047058,
-0.7789125442504883,
-0.22706358134746552,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cherry0324/miniimagenet_caption | cherry0324 | 2023-11-17T15:51:13Z | 13 | 0 | null | [
"region:us"
] | 2023-11-17T15:51:13Z | 2023-11-17T05:27:24.000Z | 2023-11-17T05:27:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2506731980.0
num_examples: 50000
download_size: 1288083278
dataset_size: 2506731980.0
---
# Dataset Card for "miniimagenet_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7123634815216064,
-0.06471778452396393,
0.18025396764278412,
0.20827558636665344,
-0.4660468101501465,
0.019299305975437164,
0.30524155497550964,
-0.01336000021547079,
0.9108831286430359,
0.39618873596191406,
-1.0163660049438477,
-0.4422319233417511,
-0.624085545539856,
-0.1385276168584... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
youlive789/sales | youlive789 | 2023-11-19T05:38:27Z | 13 | 0 | null | [
"region:us"
] | 2023-11-19T05:38:27Z | 2023-11-18T02:35:34.000Z | 2023-11-18T02:35:34 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edited_image
dtype: image
- name: edit_promt
dtype: string
splits:
- name: train
num_bytes: 40172999.0
num_examples: 50
download_size: 40172602
dataset_size: 40172999.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
buddhist-nlp/sentence-alignment-tib-eng | buddhist-nlp | 2023-11-18T19:25:01Z | 13 | 0 | null | [
"region:us"
] | 2023-11-18T19:25:01Z | 2023-11-18T19:24:28.000Z | 2023-11-18T19:24:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 739829565
num_examples: 427999
- name: validation
num_bytes: 173302
num_examples: 100
- name: test
num_bytes: 172985
num_examples: 100
download_size: 461575214
dataset_size: 740175852
---
# Dataset Card for "sentence-alignment-tib-eng"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.34426984190940857,
-0.5242792963981628,
0.29320093989372253,
0.4039091169834137,
0.06511099636554718,
0.1079295352101326,
-0.01089292112737894,
-0.09812135994434357,
0.8687194585800171,
0.31113192439079285,
-0.7591242790222168,
-0.7496129870414734,
-0.552337646484375,
-0.064403876662254... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anetakahle/sexchat_cs1 | anetakahle | 2023-11-19T18:46:47Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-19T18:46:47Z | 2023-11-19T18:42:05.000Z | 2023-11-19T18:42:05 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dthoek/DocLayNet-large_paragraphs_encoded_ml512 | dthoek | 2023-11-21T10:29:34Z | 13 | 0 | null | [
"region:us"
] | 2023-11-21T10:29:34Z | 2023-11-19T23:26:10.000Z | 2023-11-19T23:26:10 | ---
dataset_info:
features:
- name: page_hash
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: normalized_bboxes
sequence:
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3793445572
num_examples: 150701
- name: validation
num_bytes: 401719948
num_examples: 15959
- name: test
num_bytes: 275482368
num_examples: 10944
download_size: 110722707
dataset_size: 4470647888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Divyanshu04/low-light-data | Divyanshu04 | 2023-11-23T05:31:39Z | 13 | 0 | null | [
"size_categories:n<1K",
"language:en",
"region:us"
] | 2023-11-23T05:31:39Z | 2023-11-20T08:38:33.000Z | 2023-11-20T08:38:33 | ---
language:
- en
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BDARUI03/finetuned_dataset | BDARUI03 | 2023-11-20T09:21:20Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-20T09:21:20Z | 2023-11-20T09:20:39.000Z | 2023-11-20T09:20:39 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cs182project/182-final-project | cs182project | 2023-11-22T06:18:30Z | 13 | 0 | null | [
"region:us"
] | 2023-11-22T06:18:30Z | 2023-11-20T11:01:33.000Z | 2023-11-20T11:01:33 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Odunope/mini-platypus | Odunope | 2023-11-20T13:11:38Z | 13 | 0 | null | [
"region:us"
] | 2023-11-20T13:11:38Z | 2023-11-20T13:07:34.000Z | 2023-11-20T13:07:34 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4205526
num_examples: 1000
download_size: 2246282
dataset_size: 4205526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mengmengmmm/csn_go_trainuse_slice3 | mengmengmmm | 2023-11-20T15:25:21Z | 13 | 0 | null | [
"region:us"
] | 2023-11-20T15:25:21Z | 2023-11-20T15:25:00.000Z | 2023-11-20T15:25:00 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/EC_fold | lhallee | 2023-11-21T03:35:33Z | 13 | 0 | null | [
"region:us"
] | 2023-11-21T03:35:33Z | 2023-11-21T03:35:27.000Z | 2023-11-21T03:35:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 30167609
num_examples: 13089
- name: valid
num_bytes: 3394049
num_examples: 1465
- name: test
num_bytes: 3655560
num_examples: 1604
download_size: 9383528
dataset_size: 37217218
---
# Dataset Card for "EC_fold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7770465016365051,
-0.25020867586135864,
0.27096349000930786,
0.1994348168373108,
-0.07980523258447647,
-0.1427958458662033,
0.3047883212566376,
-0.18059322237968445,
0.877004861831665,
0.6858542561531067,
-1.175502061843872,
-0.9815318584442139,
-0.4409034848213196,
-0.19315987825393677... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BhargaviAcharya/Text-summ-llama2-2 | BhargaviAcharya | 2023-11-21T21:00:38Z | 13 | 0 | null | [
"region:us"
] | 2023-11-21T21:00:38Z | 2023-11-21T21:00:25.000Z | 2023-11-21T21:00:25 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Summary
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 74649139
num_examples: 4149
download_size: 43829609
dataset_size: 74649139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UsQuake/ampk_qa | UsQuake | 2023-11-23T03:00:52Z | 13 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-23T03:00:52Z | 2023-11-22T06:40:05.000Z | 2023-11-22T06:40:05 | ---
license: apache-2.0
dataset_info:
features:
- name: answer
dtype: string
- name: context
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 185469
num_examples: 48
download_size: 107053
dataset_size: 185469
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
showchen/Mayuri | showchen | 2023-11-22T07:30:35Z | 13 | 0 | null | [
"region:us"
] | 2023-11-22T07:30:35Z | 2023-11-22T07:29:58.000Z | 2023-11-22T07:29:58 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jhan21/amazon-food-reviews-dataset | jhan21 | 2023-11-25T20:44:06Z | 13 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"amazon",
"reviews",
"food reviews",
"business",
"reg... | 2023-11-25T20:44:06Z | 2023-11-22T20:14:11.000Z | 2023-11-22T20:14:11 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: amazon-food-reviews-dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- amazon
- reviews
- food reviews
- business
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for "Amazon Food Reviews"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of reviews of fine foods from Amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plain-text review. It also includes reviews from all other Amazon categories.
### Supported Tasks and Leaderboards
This dataset can be used for numerous tasks like sentiment analysis, text classification, and user behavior analysis. It's particularly useful for training models to understand customer feedback and preferences.
### Languages
The reviews are primarily in English.
## Dataset Structure
### Data Instances
A typical data instance comprises a review with fields like product ID, user ID, rating, review text, helpfulness votes, and time of the review.
### Data Fields
- `ProductId`: Unique identifier for the product
- `UserId`: Unique identifier for the user
- `ProfileName`: Profile name of the user
- `HelpfulnessNumerator`: Number of users who found the review helpful
- `HelpfulnessDenominator`: Number of users who indicated whether they found the review helpful or not
- `Score`: Rating between 1 and 5
- `Time`: Timestamp of the review
- `Summary`: Brief summary of the review
- `Text`: Text of the review
### Data Splits
The dataset is not split into standard training/validation/testing sets. Users may need to create these splits as per their requirement.
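Since no official splits ship with the data, one way to derive them is sketched below. This is a minimal example on a toy stand-in for the review table; the 80/20 ratio and the convention of mapping `Score >= 4` to a positive label are assumptions, not part of the dataset.

```python
import pandas as pd

# Toy stand-in for the review table; the real data has the fields listed above.
df = pd.DataFrame({
    "Score": [1, 2, 4, 5, 5, 1, 4, 2],
    "Text": ["stale", "meh", "good", "great", "tasty", "bad", "fresh", "bland"],
})

# Common convention (an assumption here): Score >= 4 -> positive (1), else negative (0).
df["label"] = (df["Score"] >= 4).astype(int)

# Simple 80/20 random split; a real pipeline might stratify by label instead.
train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)
```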
## Dataset Creation
### Curation Rationale
The dataset was created to provide a large collection of textual reviews with sentiment labels, useful for tasks in sentiment analysis and natural language processing.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Amazon's food reviews section.
#### Who are the source language producers?
The source language producers are the Amazon users / customers who provided these reviews.
### Annotations
#### Annotation process
The reviews come with ratings that can be converted into sentiment labels, but no additional annotation process was described.
#### Who are the annotators?
The annotators are the Amazon users who left the reviews and ratings.
### Personal and Sensitive Information
The dataset contains user IDs and profile names which could potentially be used to identify the reviewers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset provides insights into consumer preferences and sentiment, which can be valuable for businesses and researchers. However, care should be taken to ensure that models trained on this data do not reinforce stereotypes or biases present in the reviews.
### Discussion of Biases
The dataset may contain biases inherent in the user base of Amazon, which may not be representative of the general population.
### Other Known Limitations
The dataset's scope is limited to food products and may not generalize well to other types of products or reviews.
## Additional Information
### Dataset Curators
The dataset was originally curated by the SNAP group.
### Licensing Information
The dataset is available under the CC0 1.0 (public domain) license, as stated in the metadata above.
### Citation Information
If you publish articles based on this dataset, please cite the following paper:
J. McAuley and J. Leskovec. _From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews_. WWW, 2013.
### Contributions
Thanks to [@Stanford Network Analysis Project](https://www.kaggle.com/datasets/snap/amazon-fine-food-reviews/data) for adding this dataset. | [
-0.39340877532958984,
-0.5746804475784302,
0.03597048670053482,
0.2351226806640625,
-0.1617102175951004,
-0.021343983709812164,
0.0016903117066249251,
-0.5522977709770203,
0.5444613099098206,
0.8028703927993774,
-0.6940327882766724,
-0.8589213490486145,
-0.5517141222953796,
0.3374154269695... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bgspaditya/mal-url-treat-no-trunc | bgspaditya | 2023-11-23T02:12:40Z | 13 | 0 | null | [
"region:us"
] | 2023-11-23T02:12:40Z | 2023-11-23T02:12:13.000Z | 2023-11-23T02:12:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: url
dtype: string
- name: type
dtype: string
- name: type_code
dtype: int64
splits:
- name: train
num_bytes: 42427560.8
num_examples: 512900
- name: val
num_bytes: 5303403.7395390915
num_examples: 64112
- name: test
num_bytes: 5303486.460460909
num_examples: 64113
download_size: 32110906
dataset_size: 53034451.0
---
# Dataset Card for "mal-url-treat-no-trunc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.40225541591644287,
-0.5074363946914673,
0.09231137484312057,
0.1948530524969101,
-0.28954848647117615,
0.005053057335317135,
0.3059409260749817,
-0.06142372265458107,
0.8767384886741638,
0.747900128364563,
-0.7694680094718933,
-1.0996301174163818,
-0.6603560447692871,
-0.007272090762853... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GokhanAI/INST_DATA | GokhanAI | 2023-11-23T15:08:43Z | 13 | 0 | null | [
"region:us"
] | 2023-11-23T15:08:43Z | 2023-11-23T15:07:56.000Z | 2023-11-23T15:07:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 107826078.69637841
num_examples: 37513
- name: test
num_bytes: 5748731.303621593
num_examples: 2000
download_size: 0
dataset_size: 113574810.0
---
# Dataset Card for "INST_DATA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5267369151115417,
-0.21292316913604736,
0.19943593442440033,
0.2527773082256317,
-0.3022010922431946,
0.15305861830711365,
0.2871536612510681,
-0.17807096242904663,
0.7575011253356934,
0.49914443492889404,
-0.736985445022583,
-0.6966738700866699,
-0.5155423879623413,
-0.1804492175579071... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
result-kand2-sdxl-wuerst-karlo/a36c16e4 | result-kand2-sdxl-wuerst-karlo | 2023-11-24T04:21:25Z | 13 | 0 | null | [
"region:us"
] | 2023-11-24T04:21:25Z | 2023-11-24T04:21:24.000Z | 2023-11-24T04:21:24 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 170
num_examples: 10
download_size: 1325
dataset_size: 170
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a36c16e4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8273065686225891,
0.000021590127289528027,
0.21605177223682404,
0.20250453054904938,
0.017096837982535362,
-0.009990440681576729,
0.3750399053096771,
-0.29356151819229126,
0.8004765510559082,
0.44289088249206543,
-1.0471415519714355,
-0.7735456228256226,
-0.4519527554512024,
-0.13198366... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdice/rapids-codegen | bdice | 2023-11-27T03:47:12Z | 13 | 0 | null | [
"region:us"
] | 2023-11-27T03:47:12Z | 2023-11-25T00:49:35.000Z | 2023-11-25T00:49:35 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 378315950
num_examples: 16827
download_size: 151107014
dataset_size: 378315950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rapids-codegen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6345095038414001,
-0.005360244773328304,
0.10732922703027725,
0.40657249093055725,
-0.09085387736558914,
0.21668937802314758,
0.23832252621650696,
0.009406022727489471,
0.634914219379425,
0.460309773683548,
-0.9325939416885376,
-0.8453455567359924,
-0.7110435366630554,
-0.24170452356338... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RyuBcn/reciclauto | RyuBcn | 2023-11-27T03:07:58Z | 13 | 0 | null | [
"task_categories:image-classification",
"size_categories:n<1K",
"language:es",
"license:mit",
"not-for-all-audiences",
"region:us"
] | 2023-11-27T03:07:58Z | 2023-11-25T01:37:09.000Z | 2023-11-25T01:37:09 | ---
license: mit
tags:
- not-for-all-audiences
task_categories:
- image-classification
language:
- es
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rvenie/mybert | rvenie | 2023-11-25T15:18:12Z | 13 | 0 | null | [
"task_categories:table-question-answering",
"language:ru",
"license:apache-2.0",
"region:us"
] | 2023-11-25T15:18:12Z | 2023-11-25T14:44:17.000Z | 2023-11-25T14:44:17 | ---
license: apache-2.0
task_categories:
- table-question-answering
language:
- ru
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EazyAl/Quaid-audio-only | EazyAl | 2023-11-25T19:24:33Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-25T19:24:33Z | 2023-11-25T17:12:18.000Z | 2023-11-25T17:12:18 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dapooni/sorsolingo_en_bsl | dapooni | 2023-11-28T17:55:45Z | 13 | 0 | null | [
"region:us"
] | 2023-11-28T17:55:45Z | 2023-11-26T16:20:01.000Z | 2023-11-26T16:20:01 | ---
dataset_info:
features:
- name: en
dtype: string
- name: bsl
dtype: string
splits:
- name: train
num_bytes: 75385
num_examples: 1517
download_size: 53873
dataset_size: 75385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pegah-a/small-natural-instructions | pegah-a | 2023-11-27T17:55:57Z | 13 | 0 | null | [
"region:us"
] | 2023-11-27T17:55:57Z | 2023-11-27T17:48:33.000Z | 2023-11-27T17:48:33 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yangwang825/sst2-remove-non-stopwords-n2 | yangwang825 | 2023-11-27T18:40:17Z | 13 | 0 | null | [
"region:us"
] | 2023-11-27T18:40:17Z | 2023-11-27T18:40:10.000Z | 2023-11-27T18:40:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 884164
num_examples: 6920
- name: validation
num_bytes: 112712
num_examples: 872
- name: test
num_bytes: 208473
num_examples: 1821
download_size: 713427
dataset_size: 1205349
---
# Dataset Card for "sst2-remove-non-stopwords-n2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3153568208217621,
-0.6188904047012329,
0.12744271755218506,
0.27492669224739075,
-0.6953715682029724,
0.272190660238266,
-0.17490030825138092,
-0.12697111070156097,
0.8711530566215515,
0.5548556447029114,
-0.7310025095939636,
-0.6198015809059143,
-0.8727024793624878,
-0.1014248132705688... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsfactory/mental_health_reddit_posts | jsfactory | 2021-12-11T15:20:12Z | 12 | 1 | null | [
"region:us"
] | 2021-12-11T15:20:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaushikacharya/github-issues | kaushikacharya | 2022-02-01T16:55:28Z | 12 | 0 | null | [
"region:us"
] | 2022-02-01T16:55:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keshan/large-sinhala-asr-dataset | keshan | 2021-07-13T18:41:05Z | 12 | 1 | null | [
"region:us"
] | 2021-07-13T18:41:05Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keshan/multispeaker-tts-sinhala | keshan | 2021-10-04T15:39:30Z | 12 | 0 | null | [
"region:us"
] | 2021-10-04T15:39:30Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | This dataset contains multi-speaker, high-quality transcribed audio data for Sinhalese, consisting of WAV files and their transcriptions.
The dataset has been manually quality-checked, but there might still be errors.
This dataset was collected by Google in Sri Lanka.
See [LICENCE.txt](https://www.openslr.org/resources/30/LICENSE.txt) file for license information.
If you use this data in publications, please cite it as follows:
```
@inproceedings{Sodimana2018,
author={Keshan Sodimana and Pasindu {De Silva} and Supheakmungkol Sarin and Oddur Kjartansson and Martin Jansche and Knot Pipatsrisawat and Linne Ha},
title={{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
year=2018,
booktitle={Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages},
pages={66--70},
doi={10.21437/SLTU.2018-14},
url={http://dx.doi.org/10.21437/SLTU.2018-14}
}
``` | [
-0.17228543758392334,
-0.3223586976528168,
0.07053692638874054,
0.44414129853248596,
-0.394339382648468,
0.1573125123977661,
-0.367872416973114,
-0.30519965291023254,
0.4141301214694977,
0.8506192564964294,
-0.27391335368156433,
-0.6181384921073914,
-0.2497309446334839,
0.28800296783447266... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
krandiash/sc09 | krandiash | 2022-02-22T03:26:11Z | 12 | 0 | null | [
"region:us"
] | 2022-02-22T03:26:11Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # SC09 Dataset
SC09 is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.
We include an `sc09.zip` file that contains:
- folders `zero` through `nine`, each containing audio files sampled at 16kHz corresponding to utterances for the digit
- `validation_list.txt` containing the list of validation utterances
- `testing_list.txt` containing the list of testing utterances
- the original `LICENSE` file
We split the data into train-val-test for training SaShiMi models and baselines by following the splits provided in `validation_list.txt` and `testing_list.txt`.
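A minimal sketch of that partitioning (the file names below are illustrative; in practice the two sets are read line-by-line from `validation_list.txt` and `testing_list.txt` inside `sc09.zip`):

```python
# Hypothetical in-memory stand-ins for the two list files shipped in sc09.zip.
validation_list = {"zero/0a.wav", "one/1a.wav"}
testing_list = {"two/2a.wav"}
all_files = ["zero/0a.wav", "zero/0b.wav", "one/1a.wav", "two/2a.wav", "nine/9a.wav"]

# Every file named in a list goes to that split; everything else is training data.
splits = {"train": [], "valid": [], "test": []}
for f in all_files:
    if f in validation_list:
        splits["valid"].append(f)
    elif f in testing_list:
        splits["test"].append(f)
    else:
        splits["train"].append(f)
```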
We also include a `sc09_quantized.zip` file, which contains examples that were used in our MTurk study (details of which can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise that is experienced by samples generated by autoregressive models that are trained with mu-law quantization.
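The quantize-then-dequantize round trip referenced above can be sketched as follows. This is a generic 8-bit mu-law codec in NumPy, not the exact pipeline used to produce the MTurk samples; `mu=255` matches common WaveNet-style practice.

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map a waveform in [-1, 1] to integer codes in [0, mu]."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.floor((y + 1) / 2 * mu + 0.5).astype(np.int64)

def mu_law_decode(codes, mu=255):
    """Map integer codes back to an approximate waveform in [-1, 1]."""
    y = 2.0 * codes / mu - 1.0
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1.0, 1.0, 16000)        # stand-in for a 1 s clip at 16 kHz
roundtrip = mu_law_decode(mu_law_encode(x))
# The residual is the quantization noise described above: small, but audible
# enough to matter in listening studies.
```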
You can use the following BibTeX entries to appropriately cite prior work related to this dataset if you decide to use this in your research:
```
@article{goel2022sashimi,
title={It's Raw! Audio Generation with State-Space Models},
author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
journal={arXiv preprint arXiv:2202.09729},
year={2022}
}
@inproceedings{donahue2019adversarial,
title={Adversarial Audio Synthesis},
author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
booktitle={International Conference on Learning Representations},
year={2019}
}
@article{Warden2018SpeechCA,
title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
author={Pete Warden},
journal={ArXiv},
year={2018},
volume={abs/1804.03209}
}
``` | [
-0.27088433504104614,
-0.26605966687202454,
0.4014512896537781,
-0.12654218077659607,
-0.2862309217453003,
-0.09261593222618103,
-0.26702260971069336,
-0.07214595377445221,
0.22108392417430878,
0.4986369013786316,
-0.5998106598854065,
-0.45273342728614807,
-0.2464301735162735,
-0.001605130... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion_100m_vqgan_f8 | laion | 2021-12-25T05:27:42Z | 12 | 2 | null | [
"region:us"
] | 2021-12-25T05:27:42Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # VQGAN (f8, 8192) embeddings for LAION-100M
This dataset contains __VQGAN (f8, 8192)__ embeddings for the images
from the first ~100 million image-text pairs of the [LAION-400M dataset](https://laion.ai/laion-400-open-dataset/).
VQGAN was introduced in the paper
["Taming Transformers for High-Resolution Image Synthesis"](https://github.com/CompVis/taming-transformers)
and adopted for training [DALLE-mini](https://github.com/borisdayma/dalle-mini).
**Warning**: This large-scale dataset is non-curated. It was built for research purposes, to enable testing model training at larger scale for the broad research community and other interested communities, and **is not meant for any real-world production or application.**
[VQGAN (f8, 8192)](https://github.com/CompVis/taming-transformers#overview-of-pretrained-models)
is a pretrained model with downsampling factor `f=8`, 8192 codebook entries, and Gumbel quantization.
We did not perform any fine-tuning and used the VQGAN wrapper from the [DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch) repository for inference. Since LAION-400M contains 256x256 images, the model produces 1024 codes for each image.
The data is provided as `*.parquet` files with the embeddings and meta information:
- The embeddings (`code` column) are represented as binary data that can be decoded
using `np.frombuffer(data, np.int16).reshape(32, 32)`.
- The meta information (`caption`, `url`, and other columns) is the same as in the `*.parquet` files from LAION-400M
(see description [here](https://laion.ai/laion-400-open-dataset/)).
- This dataset does not contain the original images.
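For illustration, the decoding step looks like this. The bytes below are a synthetic stand-in for one value of the `code` column; the real values come from the parquet files.

```python
import numpy as np

# Synthetic stand-in for one `code` cell: 1024 int16 VQGAN codebook indices
# in [0, 8192), serialized as raw bytes the way they appear in the parquet files.
rng = np.random.default_rng(0)
raw_bytes = rng.integers(0, 8192, size=1024, dtype=np.int16).tobytes()

# The decoding given above: bytes -> int16 array -> 32x32 grid of codes.
codes = np.frombuffer(raw_bytes, dtype=np.int16).reshape(32, 32)
```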
The data corresponds to the shards `00000`, `00001`, ..., `09999` of LAION-400M.
0.07% of the shards were excluded since they were corrupted in the original dataset.
The LAION-400M dataset is distributed under the [CC-BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The VQGAN models are distributed under the [MIT license](https://github.com/CompVis/taming-transformers/blob/master/License.txt).
| [
-0.3930020034313202,
-0.42463868856430054,
0.36096030473709106,
-0.05572972074151039,
-0.3753795623779297,
-0.31549668312072754,
-0.004498734138906002,
-0.01381908729672432,
0.0734691396355629,
0.8945502042770386,
-0.3291590213775635,
-0.6780906319618225,
-0.5024154186248779,
-0.1119147539... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lewtun/bulk-superb-s3p-superb-49606 | lewtun | 2021-08-02T16:38:29Z | 12 | 1 | null | [
"benchmark:superb",
"region:us"
] | 2021-08-02T16:38:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
benchmark: superb
task: asr
type: prediction
---
# Batch job
model_id: lewtun/superb-s3prl-osanseviero__hubert_base-asr-cbcd177a
dataset_name: superb
dataset_config: asr
dataset_split: test
dataset_column: file | [
-0.283635675907135,
-0.30857187509536743,
0.40719640254974365,
0.25324076414108276,
-0.77559894323349,
-0.050224244594573975,
0.4518803656101227,
0.36480817198753357,
0.07891969382762909,
0.9060512781143188,
-0.889355480670929,
-0.5302286148071289,
-0.9762235879898071,
0.18456773459911346,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mammut/mammut-corpus-venezuela-test-set | mammut | 2022-10-22T08:58:48Z | 12 | 0 | null | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-22T08:58:48Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- es
language_bcp47:
- es-VE
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: mammut-corpus-venezuela
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# mammut-corpus-venezuela
HuggingFace Dataset for testing purposes. The train dataset is `mammut/mammut-corpus-venezuela`.
## 1. How to use
How to load this dataset directly with the datasets library:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")
```
## 2. Dataset Summary
**mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was gathered by web scraping different portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language.
Each record in the dataset contains the author of the text (anonymized for conversation authors), the date on which the text entered in the corpus, the text which was automatically tokenized at sentence level for sources other than conversations, the source of the text, the title of the text, the number of tokens (excluding punctuation marks) of the text, and the linguistic register of the text.
This is the test set for `mammut/mammut-corpus-venezuela` dataset.
## 3. Supported Tasks and Leaderboards
This dataset can be used for language modeling testing.
## 4. Languages
The dataset contains Venezuelan and Latin-American Spanish.
## 5. Dataset Structure
Dataset structure features.
### 5.1 Data Instances
An example from the dataset:
"AUTHOR":"author in title",
"TITLE":"Luis Alberto Buttó: Hecho en socialismo",
"SENTENCE":"Históricamente, siempre fue así.",
"DATE":"2021-07-04 07:18:46.918253",
"SOURCE":"la patilla",
"TOKENS":"4",
"TYPE":"opinion/news",
The total token count is provided below.
### 5.2 Total tokens (excluding punctuation marks)
Test: 4,876,739.
### 5.3 Data Fields
The data have several fields:
AUTHOR: author of the text. It is anonymized for conversation authors.
DATE: date on which the text was entered in the corpus.
SENTENCE: text. It was automatically tokenized for sources other than conversations.
SOURCE: source of the texts.
TITLE: title of the text from which SENTENCE originates.
TOKENS: number of tokens (excluding punctuation marks) of SENTENCE.
TYPE: linguistic register of the text.
### 5.4 Data Splits
The mammut-corpus-venezuela dataset has 2 splits: train and test. Below are the statistics:
Number of instances in the test split (hosted here): 157,011.
## 6. Dataset Creation
### 6.1 Curation Rationale
The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used for pre-training a model from scratch or for fine-tuning a pre-trained model.
### 6.2 Source Data
**6.2.1 Initial Data Collection and Normalization**
The data consists of opinion articles and text messages. It was collected by web scraping different portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora available online.
The text from the web-scraping process was split into sentences and automatically tokenized for sources other than conversations.
An Arrow/Parquet file was created.
Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking corpus), "El atajo más largo" (blog), El Pitazo (website), La Patilla (website), Venezuelan movies subtitles, Preseea Mérida (oral speaking corpus), Prodavinci (website), Runrunes (website), and Telegram group chats.
**6.2.2 Who are the source language producers?**
The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.
## 6.3 Annotations
**6.3.1 Annotation process**
At the moment the dataset does not contain any additional annotations.
**6.3.2 Who are the annotators?**
Not applicable.
### 6.4 Personal and Sensitive Information
The data is partially anonymized. Also, the corpus includes messages from Telegram selling chats; some of these messages may be fake or contain misleading or offensive language.
## 7. Considerations for Using the Data
### 7.1 Social Impact of Dataset
The purpose of this dataset is to help the development of language modeling models (pre-training or fine-tuning) in Venezuelan Spanish.
### 7.2 Discussion of Biases
Most of the content comes from political, economic, and sociological opinion articles, so social biases may be present.
### 7.3 Other Known Limitations
Not applicable.
## 8. Additional Information
### 8.1 Dataset Curators
The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.
### 8.2 Licensing Information
Not applicable.
### 8.3 Citation Information
Not applicable.
### 8.4 Contributions
Not applicable.
| [
-0.2878764569759369,
-0.6722034811973572,
0.0007494759629480541,
0.3017856180667877,
-0.4068656861782074,
0.20628878474235535,
-0.28294652700424194,
-0.2071673721075058,
0.6421251893043518,
0.524351179599762,
-0.43100130558013916,
-0.8140934109687805,
-0.7556403279304504,
0.135562494397163... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maximedb/paws-x-all | maximedb | 2021-10-20T16:28:36Z | 12 | 0 | null | [
"region:us"
] | 2021-10-20T16:28:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maximedb/vaccinchat_retrieval | maximedb | 2021-12-20T21:41:02Z | 12 | 0 | null | [
"region:us"
] | 2021-12-20T21:41:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
midas/semeval2010 | midas | 2022-03-05T03:24:16Z | 12 | 0 | null | [
"arxiv:1910.08840",
"region:us"
] | 2022-03-05T03:24:16Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 |
A dataset for benchmarking keyphrase extraction and generation techniques from long document english scientific articles. For more details about the dataset please refer the original paper - [https://dl.acm.org/doi/10.5555/1859664.1859668](https://dl.acm.org/doi/10.5555/1859664.1859668)
Original source of the data - [https://github.com/boudinfl/semeval-2010-pre](https://github.com/boudinfl/semeval-2010-pre)
## Dataset Summary
The SemEval-2010 dataset was originally proposed by *Su Nam Kim et al.* in the paper titled [SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles](https://aclanthology.org/S10-1004.pdf) in the year 2010. The dataset consists of 284 English scientific papers from the ACM Digital Library (conference and workshop papers). The selected articles belong to the
following four 1998 ACM classifications: C2.4 (Distributed Systems), H3.3 (Information Search and Retrieval), I2.11 (Distributed Artificial Intelligence – Multiagent Systems) and J4 (Social and Behavioral Sciences – Economics). Each paper has two sets of keyphrases, one annotated by readers and one by the authors. The original dataset was divided into trial, training and test splits, evenly distributed across the four domains. The trial, training and test splits had 40, 144 and 100 articles respectively, the trial split being a subset of the training split. We provide test and train splits with 100 and 144 articles respectively.
The dataset shared here categorizes the keyphrases into *extractive* and *abstractive*. **Extractive keyphrases** are those that appear in the input text, whereas **abstractive keyphrases** do not. In order to get all the metadata about the documents and keyphrases, please refer to the [original source](https://github.com/boudinfl/semeval-2010-pre) from which this dataset was taken. The main motivation behind making the dataset available in this form is to make it easy for researchers to programmatically download it and evaluate their models on the tasks of keyphrase extraction and generation. As treating keyphrase extraction as a sequence tagging task with contextual language models has become popular - [Keyphrase extraction from scholarly articles as sequence labeling using contextualized embeddings](https://arxiv.org/pdf/1910.08840.pdf) - we have also made the token tags available in the BIO tagging format.
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: list of all the present keyphrases.
- **abstractive_keyphrases**: list of all the absent keyphrases.
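The `doc_bio_tags` field can be decoded back into the present keyphrases by grouping consecutive B/I tags. A small illustrative helper (not part of the dataset loader; the tokens below are made up for the example):

```python
def bio_to_keyphrases(tokens, tags):
    """Recover present keyphrases from a BIO-tagged token sequence.

    A "B" tag opens a new keyphrase, "I" extends the current one,
    and "O" closes any open keyphrase.
    """
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # "O", or a stray "I" with no open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["mean", "reciprocal", "rank", "and", "pagerank"]
tags   = ["B",    "I",          "I",    "O",   "B"]
print(bio_to_keyphrases(tokens, tags))  # ['mean reciprocal rank', 'pagerank']
```

Applied to a real sample, the phrases recovered this way correspond to the `extractive_keyphrases` field (abstractive keyphrases, by definition, cannot be recovered from the tags).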
### Data Splits
|Split| #datapoints |
|--|--|
| Test | 100 |
| Train | 144 |
Train
- Percentage of keyphrases that are named entities: 63.01% (named entities detected using scispacy - en-core-sci-lg model)
- Percentage of keyphrases that are noun phrases: 82.50% (noun phrases detected using spacy en-core-web-lg after removing determiners)
Test
- Percentage of keyphrases that are named entities: 62.06% (named entities detected using scispacy - en-core-sci-lg model)
- Percentage of keyphrases that are noun phrases: 78.36% (noun phrases detected using spacy after removing determiners)
## Usage
### Full Dataset
```python
from datasets import load_dataset
# get entire dataset
dataset = load_dataset("midas/semeval2010", "raw")
# sample from the train split
print("Sample from train dataset split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
Sample from train dataset split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['HITS', 'on', 'the', 'Web:', 'How', 'does', 'it', 'Compare?', 'Marc', 'Najork', 'Microsoft', 'Research', '1065', 'La', 'Avenida', 'Mountain', 'View,', 'CA,', 'USA', 'najork@microsoft.com', 'Hugo', 'Zaragoza', '∗', 'Yahoo!', 'Research', 'Barcelona', 'Ocata', '1', 'Barcelona', '08003,', 'Spain', 'hugoz@es.yahoo-inc.com', 'Michael', 'Taylor', 'Microsoft', 'Research', '7', 'J', 'J', 'Thompson', 'Ave', 'Cambridge', 'CB3', '0FB,', 'UK', 'mitaylor@microsoft.com', 'ABSTRACT', 'This', 'paper', 'describes', 'a', 'large-scale', 'evaluation', 'of', 'the', 'effectiveness', 'of', 'HITS', 'in', 'comparison', 'with', 'other', 'link-based', 'ranking', 'algorithms,', 'when', 'used', 'in', 'combination', 'with', 'a', 'state-ofthe-art', 'text', 'retrieval', 'algorithm', 'exploiting', 'anchor', 'text.', 'We', 'quantified', 'their', 'effectiveness', 'using', 'three', 'common', 'performance', 'measures:', 'the', 'mean', 'reciprocal', 'rank,', 'the', 'mean', 'average', 'precision,', 'and', 'the', 'normalized', 'discounted', 'cumulative', 'gain', 'measurements.', 'The', 'evaluation', 'is', 'based', 'on', 'two', 'large', 'data', 'sets:', 'a', 'breadth-first', 'search', 'crawl', 'of', '463', 'million', 'web', 'pages', 'containing', '17.6', 'billion', 'hyperlinks', 'and', 'referencing', '2.9', 'billion', 'distinct', 'URLs;', 'and', 'a', 'set', 'of', '28,043', 'queries', 'sampled', 'from', 'a', 'query', 'log,', 'each', 'query', 'having', 'on', 'average', '2,383', 'results,', 'about', '17', 'of', 'which', 'were', 'labeled', 'by', 'judges.', 'We', 'found', 'that', 'HITS', 'outperforms', 'PageRank,', 'but', 'is', 'about', 'as', 'effective', 'as', 'web-page', 'in-degree.', 'The', 'same', 'holds', 'true', 'when', 'any', 'of', 'the', 'link-based', 'features', 'are', 'combined', 'with', 'the', 'text', 'retrieval', 'algorithm.', 'Finally,', 'we', 'studied', 'the', 'relationship', 'between', 'query', 'specificity', 'and', 'the', 'effectiveness', 'of', 'selected', 'features,', 
'and', 'found', 'that', 'link-based', 'features', 'perform', 'better', 'for', 'general', 'queries,', 'whereas', 'BM25F', 'performs', 'better', 'for', 'specific', 'queries.', 'Categories', 'and', 'Subject', 'Descriptors', 'H.3.3', '[Information', 'Search', 'and', 'Retrieval]:', 'Information', 'Storage', 'and', 'Retrieval-search', 'process,', 'selection', 'process', 'General', 'Terms', 'Algorithms,', 'Measurement,', 'Experimentation', '1.', 'INTRODUCTION', 'Link', 'graph', 'features', 'such', 'as', 'in-degree', 'and', 'PageRank', 'have', 'been', 'shown', 'to', 'significantly', 'improve', 'the', 'performance', 'of', 'text', 'retrieval', 'algorithms', 'on', 'the', 'web.', 'The', 'HITS', 'algorithm', 'is', 'also', 'believed', 'to', 'be', 'of', 'interest', 'for', 'web', 'search;', 'to', 'some', 'degree,', 'one', 'may', 'expect', 'HITS', 'to', 'be', 'more', 'informative', 'that', 'other', 'link-based', 'features', 'because', 'it', 'is', 'query-dependent:', 'it', 'tries', 'to', 'measure', 'the', 'interest', 'of', 'pages', 'with', 'respect', 'to', 'a', 'given', 'query.', 'However,', 'it', 'remains', 'unclear', 'today', 'whether', 'there', 'are', 'practical', 'benefits', 'of', 'HITS', 'over', 'other', 'link', 'graph', 'measures.', 'This', 'is', 'even', 'more', 'true', 'when', 'we', 'consider', 'that', 'modern', 'retrieval', 'algorithms', 'used', 'on', 'the', 'web', 'use', 'a', 'document', 'representation', 'which', 'incorporates', 'the', 'document"s', 'anchor', 'text,', 'i.e.', 'the', 'text', 'of', 'incoming', 'links.', 'This,', 'at', 'least', 'to', 'some', 'degree,', 'takes', 'the', 'link', 'graph', 'into', 'account,', 'in', 'a', 'query-dependent', 'manner.', 'Comparing', 'HITS', 'to', 'PageRank', 'or', 'in-degree', 'empirically', 'is', 'no', 'easy', 'task.', 'There', 'are', 'two', 'main', 'difficulties:', 'scale', 'and', 'relevance.', 'Scale', 'is', 'important', 'because', 'link-based', 'features', 'are', 'known', 'to', 'improve', 'in', 'quality', 'as', 'the', 'document', 
'graph', 'grows.', 'If', 'we', 'carry', 'out', 'a', 'small', 'experiment,', 'our', 'conclusions', 'won"t', 'carry', 'over', 'to', 'large', 'graphs', 'such', 'as', 'the', 'web.', 'However,', 'computing', 'HITS', 'efficiently', 'on', 'a', 'graph', 'the', 'size', 'of', 'a', 'realistic', 'web', 'crawl', 'is', 'extraordinarily', 'difficult.', 'Relevance', 'is', 'also', 'crucial', 'because', 'we', 'cannot', 'measure', 'the', 'performance', 'of', 'a', 'feature', 'in', 'the', 'absence', 'of', 'human', 'judgments:', 'what', 'is', 'crucial', 'is', 'ranking', 'at', 'the', 'top', 'of', 'the', 'ten', 'or', 'so', 'documents', 'that', 'a', 'user', 'will', 'peruse.', 'To', 'our', 'knowledge,', 'this', 'paper', 'is', 'the', 'first', 'attempt', 'to', 'evaluate', 'HITS', 'at', 'a', 'large', 'scale', 'and', 'compare', 'it', 'to', 'other', 'link-based', 'features', 'with', 'respect', 'to', 'human', 'evaluated', 'judgment.', 'Our', 'results', 'confirm', 'many', 'of', 'the', 'intuitions', 'we', 'have', 'about', 'link-based', 'features', 'and', 'their', 'relationship', 'to', 'text', 'retrieval', 'methods', 'exploiting', 'anchor', 'text.', 'This', 'is', 'reassuring:', 'in', 'the', 'absence', 'of', 'a', 'theoretical', 'model', 'capable', 'of', 'tying', 'these', 'measures', 'with', 'relevance,', 'the', 'only', 'way', 'to', 'validate', 'our', 'intuitions', 'is', 'to', 'carry', 'out', 'realistic', 'experiments.', 'However,', 'we', 'were', 'quite', 'surprised', 'to', 'find', 'that', 'HITS,', 'a', 'query-dependent', 'feature,', 'is', 'about', 'as', 'effective', 'as', 'web', 'page', 'in-degree,', 'the', 'most', 'simpleminded', 'query-independent', 'link-based', 'feature.', 'This', 'continues', 'to', 'be', 'true', 'when', 'the', 'link-based', 'features', 'are', 'combined', 'with', 'a', 'text', 'retrieval', 'algorithm', 'exploiting', 'anchor', 'text.', 'The', 'remainder', 'of', 'this', 'paper', 'is', 'structured', 'as', 'follows:', 'Section', '2', 'surveys', 'related', 'work.', 'Section', '3', 
'describes', 'the', 'data', 'sets', 'we', 'used', 'in', 'our', 'study.', 'Section', '4', 'reviews', 'the', 'performance', 'measures', 'we', 'used.', 'Sections', '5', 'and', '6', 'describe', 'the', 'PageRank', 'and', 'HITS', 'algorithms', 'in', 'more', 'detail,', 'and', 'sketch', 'the', 'computational', 'infrastructure', 'we', 'employed', 'to', 'carry', 'out', 'large', 'scale', 'experiments.', 'Section', '7', 'presents', 'the', 'results', 'of', 'our', 'evaluations,', 'and', 'Section', '8', 'offers', 'concluding', 'remarks.', '2.', 'RELATED', 'WORK', 'The', 'idea', 'of', 'using', 'hyperlink', 'analysis', 'for', 'ranking', 'web', 'search', 'results', 'arose', 'around', '1997,', 'and', 'manifested', 'itself', 'in', 'the', 'HITS', '[16,', '17]', 'and', 'PageRank', '[5,', '21]', 'algorithms.', 'The', 'popularity', 'of', 'these', 'two', 'algorithms', 'and', 'the', 'phenomenal', 'success', 'of', 'the', 'Google', 'search', 'engine,', 'which', 'uses', 'PageRank,', 'have', 'spawned', 'a', 'large', 'amount', 'of', 'subsequent', 'research.', 'There', 'are', 'numerous', 'attempts', 'at', 'improving', 'the', 'effectiveness', 'of', 'HITS', 'and', 'PageRank.', 'Query-dependent', 'link-based', 'ranking', 'algorithms', 'inspired', 'by', 'HITS', 'include', 'SALSA', '[19],', 'Randomized', 'HITS', '[20],', 'and', 'PHITS', '[7],', 'to', 'name', 'a', 'few.', 'Query-independent', 'link-based', 'ranking', 'algorithms', 'inspired', 'by', 'PageRank', 'include', 'TrafficRank', '[22],', 'BlockRank', '[14],', 'and', 'TrustRank', '[11],', 'and', 'many', 'others.', 'Another', 'line', 'of', 'research', 'is', 'concerned', 'with', 'analyzing', 'the', 'mathematical', 'properties', 'of', 'HITS', 'and', 'PageRank.', 'For', 'example,', 'Borodin', 'et', 'al.', '[3]', 'investigated', 'various', 'theoretical', 'properties', 'of', 'PageRank,', 'HITS,', 'SALSA,', 'and', 'PHITS,', 'including', 'their', 'similarity', 'and', 'stability,', 'while', 'Bianchini', 'et', 'al.', '[2]', 'studied', 'the', 
'relationship', 'between', 'the', 'structure', 'of', 'the', 'web', 'graph', 'and', 'the', 'distribution', 'of', 'PageRank', 'scores,', 'and', 'Langville', 'and', 'Meyer', 'examined', 'basic', 'properties', 'of', 'PageRank', 'such', 'as', 'existence', 'and', 'uniqueness', 'of', 'an', 'eigenvector', 'and', 'convergence', 'of', 'power', 'iteration', '[18].', 'Given', 'the', 'attention', 'that', 'has', 'been', 'paid', 'to', 'improving', 'the', 'effectiveness', 'of', 'PageRank', 'and', 'HITS,', 'and', 'the', 'thorough', 'studies', 'of', 'the', 'mathematical', 'properties', 'of', 'these', 'algorithms,', 'it', 'is', 'somewhat', 'surprising', 'that', 'very', 'few', 'evaluations', 'of', 'their', 'effectiveness', 'have', 'been', 'published.', 'We', 'are', 'aware', 'of', 'two', 'studies', 'that', 'have', 'attempted', 'to', 'formally', 'evaluate', 'the', 'effectiveness', 'of', 'HITS', 'and', 'of', 'PageRank.', 'Amento', 'et', 'al.', '[1]', 'employed', 'quantitative', 'measures,', 'but', 'based', 'their', 'experiments', 'on', 'the', 'result', 'sets', 'of', 'just', '5', 'queries', 'and', 'the', 'web-graph', 'induced', 'by', 'topical', 'crawls', 'around', 'the', 'result', 'set', 'of', 'each', 'query.', 'A', 'more', 'recent', 'study', 'by', 'Borodin', 'et', 'al.', '[4]', 'is', 'based', 'on', '34', 'queries,', 'result', 'sets', 'of', '200', 'pages', 'per', 'query', 'obtained', 'from', 'Google,', 'and', 'a', 'neighborhood', 'graph', 'derived', 'by', 'retrieving', '50', 'in-links', 'per', 'result', 'from', 'Google.', 'By']
Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['ranking', 'pagerank', 'mean reciprocal rank', 'mean average precision', 'query specificity', 'link graph', 'scale and relevance', 'hyperlink analysis', 'rank', 'bm25f', 'mrr', 'map', 'ndcg']
Abstractive/absent Keyphrases: ['normalized discounted cumulative gain measurement', 'breadth-first search crawl', 'feature selection', 'link-based feature', 'quantitative measure', 'crawled web page', 'hit']
-----------
Sample from test dataset split
Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
Tokenized Document: ['Live', 'Data', 'Center', 'Migration', 'across', 'WANs:', 'A', 'Robust', 'Cooperative', 'Context', 'Aware', 'Approach', 'K.K.', 'Ramakrishnan,', 'Prashant', 'Shenoy', ',', 'Jacobus', 'Van', 'der', 'Merwe', 'AT&T', 'Labs-Research', '/', 'University', 'of', 'Massachusetts', 'ABSTRACT', 'A', 'significant', 'concern', 'for', 'Internet-based', 'service', 'providers', 'is', 'the', 'continued', 'operation', 'and', 'availability', 'of', 'services', 'in', 'the', 'face', 'of', 'outages,', 'whether', 'planned', 'or', 'unplanned.', 'In', 'this', 'paper', 'we', 'advocate', 'a', 'cooperative,', 'context-aware', 'approach', 'to', 'data', 'center', 'migration', 'across', 'WANs', 'to', 'deal', 'with', 'outages', 'in', 'a', 'non-disruptive', 'manner.', 'We', 'specifically', 'seek', 'to', 'achieve', 'high', 'availability', 'of', 'data', 'center', 'services', 'in', 'the', 'face', 'of', 'both', 'planned', 'and', 'unanticipated', 'outages', 'of', 'data', 'center', 'facilities.', 'We', 'make', 'use', 'of', 'server', 'virtualization', 'technologies', 'to', 'enable', 'the', 'replication', 'and', 'migration', 'of', 'server', 'functions.', 'We', 'propose', 'new', 'network', 'functions', 'to', 'enable', 'server', 'migration', 'and', 'replication', 'across', 'wide', 'area', 'networks', '(e.g.,', 'the', 'Internet),', 'and', 'finally', 'show', 'the', 'utility', 'of', 'intelligent', 'and', 'dynamic', 'storage', 'replication', 'technology', 'to', 'ensure', 'applications', 'have', 'access', 'to', 'data', 'in', 'the', 'face', 'of', 'outages', 'with', 'very', 'tight', 'recovery', 'point', 'objectives.', 'Categories', 'and', 'Subject', 'Descriptors', 'C.2.4', '[Computer-Communication', 'Networks]:', 'Distributed', 'Systems', 'General', 'Terms', 'Design,', 'Reliability', '1.', 'INTRODUCTION', 'A', 'significant', 'concern', 'for', 'Internet-based', 'service', 'providers', 'is', 'the', 'continued', 'operation', 'and', 'availability', 'of', 'services', 'in', 'the', 'face', 'of', 
'outages,', 'whether', 'planned', 'or', 'unplanned.', 'These', 'concerns', 'are', 'exacerbated', 'by', 'the', 'increased', 'use', 'of', 'the', 'Internet', 'for', 'mission', 'critical', 'business', 'and', 'real-time', 'entertainment', 'applications.', 'A', 'relatively', 'minor', 'outage', 'can', 'disrupt', 'and', 'inconvenience', 'a', 'large', 'number', 'of', 'users.', 'Today', 'these', 'services', 'are', 'almost', 'exclusively', 'hosted', 'in', 'data', 'centers.', 'Recent', 'advances', 'in', 'server', 'virtualization', 'technologies', '[8,', '14,', '22]', 'allow', 'for', 'the', 'live', 'migration', 'of', 'services', 'within', 'a', 'local', 'area', 'network', '(LAN)', 'environment.', 'In', 'the', 'LAN', 'environment,', 'these', 'technologies', 'have', 'proven', 'to', 'be', 'a', 'very', 'effective', 'tool', 'to', 'enable', 'data', 'center', 'management', 'in', 'a', 'non-disruptive', 'fashion.', 'Not', 'only', 'can', 'it', 'support', 'planned', 'maintenance', 'events', '[8],', 'but', 'it', 'can', 'also', 'be', 'used', 'in', 'a', 'more', 'dynamic', 'fashion', 'to', 'automatically', 'balance', 'load', 'between', 'the', 'physical', 'servers', 'in', 'a', 'data', 'center', '[22].', 'When', 'using', 'these', 'technologies', 'in', 'a', 'LAN', 'environment,', 'services', 'execute', 'in', 'a', 'virtual', 'server,', 'and', 'the', 'migration', 'services', 'provided', 'by', 'the', 'underlying', 'virtualization', 'framework', 'allows', 'for', 'a', 'virtual', 'server', 'to', 'be', 'migrated', 'from', 'one', 'physical', 'server', 'to', 'another,', 'without', 'any', 'significant', 'downtime', 'for', 'the', 'service', 'or', 'application.', 'In', 'particular,', 'since', 'the', 'virtual', 'server', 'retains', 'the', 'same', 'network', 'address', 'as', 'before,', 'any', 'ongoing', 'network', 'level', 'interactions', 'are', 'not', 'disrupted.', 'Similarly,', 'in', 'a', 'LAN', 'environment,', 'storage', 'requirements', 'are', 'normally', 'met', 'via', 'either', 'network', 'attached', 
'storage', '(NAS)', 'or', 'via', 'a', 'storage', 'area', 'network', '(SAN)', 'which', 'is', 'still', 'reachable', 'from', 'the', 'new', 'physical', 'server', 'location', 'to', 'allow', 'for', 'continued', 'storage', 'access.', 'Unfortunately', 'in', 'a', 'wide', 'area', 'environment', '(WAN),', 'live', 'server', 'migration', 'is', 'not', 'as', 'easily', 'achievable', 'for', 'two', 'reasons:', 'First,', 'live', 'migration', 'requires', 'the', 'virtual', 'server', 'to', 'maintain', 'the', 'same', 'network', 'address', 'so', 'that', 'from', 'a', 'network', 'connectivity', 'viewpoint', 'the', 'migrated', 'server', 'is', 'indistinguishable', 'from', 'the', 'original.', 'While', 'this', 'is', 'fairly', 'easily', 'achieved', 'in', 'a', 'shared', 'LAN', 'environment,', 'no', 'current', 'mechanisms', 'are', 'available', 'to', 'efficiently', 'achieve', 'the', 'same', 'feat', 'in', 'a', 'WAN', 'environment.', 'Second,', 'while', 'fairly', 'sophisticated', 'remote', 'replication', 'mechanisms', 'have', 'been', 'developed', 'in', 'the', 'context', 'of', 'disaster', 'recovery', '[20,', '7,', '11],', 'these', 'mechanisms', 'are', 'ill', 'suited', 'to', 'live', 'data', 'center', 'migration,', 'because', 'in', 'general', 'the', 'available', 'technologies', 'are', 'unaware', 'of', 'application/service', 'level', 'semantics.', 'In', 'this', 'paper', 'we', 'outline', 'a', 'design', 'for', 'live', 'service', 'migration', 'across', 'WANs.', 'Our', 'design', 'makes', 'use', 'of', 'existing', 'server', 'virtualization', 'technologies', 'and', 'propose', 'network', 'and', 'storage', 'mechanisms', 'to', 'facilitate', 'migration', 'across', 'a', 'WAN.', 'The', 'essence', 'of', 'our', 'approach', 'is', 'cooperative,', 'context', 'aware', 'migration,', 'where', 'a', 'migration', 'management', 'system', 'orchestrates', 'the', 'data', 'center', 'migration', 'across', 'all', 'three', 'subsystems', 'involved,', 'namely', 'the', 'server', 'platforms,', 'the', 'wide', 'area', 'network', 'and', 
'the', 'disk', 'storage', 'system.', 'While', 'conceptually', 'similar', 'in', 'nature', 'to', 'the', 'LAN', 'based', 'work', 'described', 'above,', 'using', 'migration', 'technologies', 'across', 'a', 'wide', 'area', 'network', 'presents', 'unique', 'challenges', 'and', 'has', 'to', 'our', 'knowledge', 'not', 'been', 'achieved.', 'Our', 'main', 'contribution', 'is', 'the', 'design', 'of', 'a', 'framework', 'that', 'will', 'allow', 'the', 'migration', 'across', 'a', 'WAN', 'of', 'all', 'subsystems', 'involved', 'with', 'enabling', 'data', 'center', 'services.', 'We', 'describe', 'new', 'mechanisms', 'as', 'well', 'as', 'extensions', 'to', 'existing', 'technologies', 'to', 'enable', 'this', 'and', 'outline', 'the', 'cooperative,', 'context', 'aware', 'functionality', 'needed', 'across', 'the', 'different', 'subsystems', 'to', 'enable', 'this.', '262', '2.', 'LIVE', 'DATA', 'CENTER', 'MIGRATION', 'ACROSS', 'WANS', 'Three', 'essential', 'subsystems', 'are', 'involved', 'with', 'hosting', 'services', 'in', 'a', 'data', 'center:', 'First,', 'the', 'servers', 'host', 'the', 'application', 'or', 'service', 'logic.', 'Second,', 'services', 'are', 'normally', 'hosted', 'in', 'a', 'data', 'center', 'to', 'provide', 'shared', 'access', 'through', 'a', 'network,', 'either', 'the', 'Internet', 'or', 'virtual', 'private', 'networks', '(VPNs).', 'Finally,', 'most', 'applications', 'require', 'disk', 'storage', 'for', 'storing', 'data', 'and', 'the', 'amount', 'of', 'disk', 'space', 'and', 'the', 'frequency', 'of', 'access', 'varies', 'greatly', 'between', 'different', 'services/applications.', 'Disruptions,', 'failures,', 'or', 'in', 'general,', 'outages', 'of', 'any', 'kind', 'of', 'any', 'of', 'these', 'components', 'will', 'cause', 'service', 'disruption.', 'For', 'this', 'reason,', 'prior', 'work', 'and', 'current', 'practices', 'have', 'addressed', 'the', 'robustness', 'of', 'individual', 'components.', 'For', 'example,', 'data', 'centers', 'typically', 'have', 'multiple', 
'network', 'connections', 'and', 'redundant', 'LAN', 'devices', 'to', 'ensure', 'redundancy', 'at', 'the', 'networking', 'level.', 'Similarly,', 'physical', 'servers', 'are', 'being', 'designed', 'with', 'redundant', 'hot-swappable', 'components', '(disks,', 'processor', 'blades,', 'power', 'supplies', 'etc).', 'Finally,', 'redundancy', 'at', 'the', 'storage', 'level', 'can', 'be', 'provided', 'through', 'sophisticated', 'data', 'mirroring', 'technologies.', 'The', 'focus', 'of', 'our', 'work,', 'however,', 'is', 'on', 'the', 'case', 'where', 'such', 'local', 'redundancy', 'mechanisms', 'are', 'not', 'sufficient.', 'Specifically,', 'we', 'are', 'interested', 'in', 'providing', 'service', 'availability', 'when', 'the', 'data', 'center', 'as', 'a', 'whole', 'becomes', 'unavailable,', 'for', 'example', 'because', 'of', 'data', 'center', 'wide', 'maintenance', 'operations,', 'or', 'because', 'of', 'catastrophic', 'events.', 'As', 'such,', 'our', 'basic', 'approach', 'is', 'to', 'migrate', 'services', 'between', 'data', 'centers', 'across', 'the', 'wide', 'are', 'network', '(WAN).', 'By', 'necessity,', 'moving', 'or', 'migrating', 'services', 'from', 'one', 'data', 'center', 'to', 'another', 'needs', 'to', 'consider', 'all', 'three', 'of', 'these', 'components.', 'Historically,', 'such', 'migration', 'has', 'been', 'disruptive', 'in', 'nature,', 'requiring', 'downtime', 'of', 'the', 'actual', 'services', 'involved,', 'or', 'requiring', 'heavy', 'weight', 'replication', 'techniques.', 'In', 'the', 'latter', 'case', 'concurrently', 'running', 'replicas', 'of', 'a', 'service', 'can', 'be', 'made', 'available', 'thus', 'allowing', 'a', 'subset', 'of', 'the', 'service', 'to', 'be', 'migrated', 'or', 'maintained', 'without', 'impacting', 'the', 'service']
Document BIO Tags: ['O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 
'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 
'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
Extractive/present Keyphrases: ['data center migration', 'wan', 'lan', 'virtual server', 'storage replication', 'synchronous replication', 'asynchronous replication', 'network support', 'storage', 'voip']
Abstractive/absent Keyphrases: ['internet-based service', 'voice-over-ip', 'database']
-----------
```
### Keyphrase Extraction
```python
from datasets import load_dataset
# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/semeval2010", "extraction")
print("Samples for Keyphrase Extraction")
# sample from the train split
print("Sample from train data split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
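The `doc_bio_tags` shown above can be turned back into keyphrase strings. The helper below is a minimal illustrative sketch, not part of the original card: tokens tagged `B` open a phrase, following `I` tags extend it, and `O` closes it.

```python
def decode_bio(tokens, tags):
    """Collect token spans tagged B/I into keyphrase strings."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":  # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continuation of the open phrase
            current.append(token)
        else:  # "O" closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# Toy example in the same shape as the dataset's "document"/"doc_bio_tags" fields
tokens = ["the", "disk", "storage", "system.", "is", "reliable"]
tags = ["O", "B", "I", "I", "O", "O"]
print(decode_bio(tokens, tags))  # ['disk storage system.']
```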
### Keyphrase Generation
```python
# get the dataset only for keyphrase generation
dataset = load_dataset("midas/semeval2010", "generation")
print("Samples for Keyphrase Generation")
# sample from the train split
print("Sample from train data split")
test_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
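The extractive/present vs. abstractive/absent split above can be checked programmatically. The following is a hypothetical helper, not from the card, that classifies a gold keyphrase as "present" when it occurs verbatim in the (lower-cased, space-joined) document; note that joining tokens with spaces is only an approximation of the original text.

```python
def split_present_absent(doc_tokens, keyphrases):
    """Split gold keyphrases into present/absent relative to the document."""
    text = " ".join(t.lower() for t in doc_tokens)
    present, absent = [], []
    for kp in keyphrases:
        (present if kp.lower() in text else absent).append(kp)
    return present, absent

doc = ["Live", "data", "center", "migration", "across", "WANs"]
present, absent = split_present_absent(doc, ["data center migration", "voip"])
print(present, absent)  # ['data center migration'] ['voip']
```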
## Citation Information
```
@inproceedings{10.5555/1859664.1859668,
author = {Kim, Su Nam and Medelyan, Olena and Kan, Min-Yen and Baldwin, Timothy},
title = {SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles},
year = {2010},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted keyphrases against manually assigned ones. We present the overall ranking of the submitted systems and discuss our findings to suggest future directions for this task.},
booktitle = {Proceedings of the 5th International Workshop on Semantic Evaluation},
pages = {21–26},
numpages = {6},
location = {Los Angeles, California},
series = {SemEval '10}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset
| [
-0.4576738476753235,
-0.5709987282752991,
0.49321845173835754,
0.3753224015235901,
-0.14278289675712585,
-0.04151564836502075,
-0.218608096241951,
-0.24237017333507538,
0.44703179597854614,
0.09393757581710815,
-0.3923077881336212,
-0.8020753860473633,
-0.4719471335411072,
0.48561024665832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ml6team/xsum_nl | ml6team | 2022-10-22T14:47:41Z | 12 | 2 | null | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|xsum",
"language:nl",
"license:unknown",
"region:us"
] | 2022-10-22T14:47:41Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- nl
language_bcp47:
- nl-BE
license:
- unknown
multilinguality:
- monolingual
pretty_name: XSum NL
size_categories:
- unknown
source_datasets:
- extended|xsum
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for XSum NL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a machine translated dataset. It's the [XSum dataset](https://huggingface.co/datasets/xsum) translated with [this model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) from English to Dutch.
See the [Hugging Face page of the original dataset](https://huggingface.co/datasets/xsum) for more information on the format of this dataset.
Use with:
```python
from datasets import load_dataset
dataset = load_dataset("ml6team/xsum_nl")
```
### Languages
Dutch
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: BBC ID of the article.
- `document`: a string containing the body of the news article.
- `summary`: a string containing a one-sentence summary of the article.
### Data Splits
- `train`
- `test`
- `validation`
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.2983829379081726,
-0.3313617706298828,
0.08993493020534515,
0.18395313620567322,
-0.22970819473266602,
0.055137887597084045,
-0.31126540899276733,
-0.44373440742492676,
0.851157009601593,
0.5541797876358032,
-0.8566786646842957,
-0.8961830139160156,
-0.758533239364624,
0.226438120007514... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
morganchen1007/1216 | morganchen1007 | 2021-12-16T02:19:28Z | 12 | 0 | null | [
"region:us"
] | 2021-12-16T02:19:28Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
msarmi9/korean-english-multitarget-ted-talks-task | msarmi9 | 2022-10-22T15:05:15Z | 12 | 5 | null | [
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:translation",
"multilinguality:multilingual",
"language:en",
"language:ko",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-22T15:05:15Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
- ko
language_bcp47:
- en-US
- ko-KR
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
- multilingual
pretty_name: English-Korean Multitarget Ted Talks Task (MTTT)
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for english-korean-multitarget-ted-talks-task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/
### Dataset Summary
- Parallel English-Korean Text Corpus
- Text was originally transcribed in English from various TED Talks, then translated into Korean by TED translators
- Approximately 166k train, 2k validation, and 2k test sentence pairs.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
- English
- Korean
## Additional Information
### Dataset Curators
Kevin Duh, "The Multitarget TED Talks Task", http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/, 2018
### Licensing Information
TED makes its collection available under the Creative Commons BY-NC-ND license. Please acknowledge TED when using this data. We acknowledge the authorship of TED Talks (BY condition). We are not redistributing the transcripts for commercial purposes (NC condition) nor making derivative works of the original contents (ND condition).
### Citation Information
@misc{duh18multitarget,
author = {Kevin Duh},
title = {The Multitarget TED Talks Task},
howpublished = {\url{http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/}},
year = {2018},
} | [
-0.254556804895401,
-0.6435467004776001,
0.28348422050476074,
0.2605108916759491,
-0.5488191246986389,
0.44035103917121887,
-0.6035061478614807,
-0.259480744600296,
0.6240746974945068,
0.36200636625289917,
-0.8314594030380249,
-0.9905999302864075,
-0.6484268307685852,
0.10252343118190765,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/auto-exp-2 | nateraw | 2021-07-13T07:10:47Z | 12 | 0 | null | [
"task_categories:other",
"auto-generated",
"image-classification",
"region:us"
] | 2021-07-13T07:10:47Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 |
---
task_categories:
- other
task_ids:
- other-image-classification
- image-classification
tags:
- auto-generated
- image-classification
---
# nateraw/auto-exp-2
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
with open(path, 'rb') as f:
im = Image.open(f)
return im.convert('RGB')
def image_loader(example_batch):
example_batch['image'] = [
pil_loader(f) for f in example_batch['file']
]
return example_batch
ds = load_dataset('nateraw/auto-exp-2')
ds = ds.with_transform(image_loader)
```
| [
-0.3646184206008911,
-0.3539734482765198,
-0.15279898047447205,
-0.0152265140786767,
-0.41191768646240234,
-0.12877000868320465,
0.023142719641327858,
-0.14023807644844055,
-0.016300324350595474,
0.4927860200405121,
-0.35372260212898254,
-0.3967403769493103,
-0.7517550587654114,
0.04492319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/rock_paper_scissors | nateraw | 2021-07-13T16:07:15Z | 12 | 0 | null | [
"region:us"
] | 2021-07-13T16:07:15Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pasinit/scotus | pasinit | 2021-09-23T14:18:55Z | 12 | 0 | null | [
"region:us"
] | 2021-09-23T14:18:55Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patrickvonplaten/librispeech_local | patrickvonplaten | 2021-08-12T00:10:54Z | 12 | 0 | null | [
"region:us"
] | 2021-08-12T00:10:54Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
persiannlp/parsinlu_translation_fa_en | persiannlp | 2022-10-24T17:01:27Z | 12 | 0 | null | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:fa",
"multilinguality:en",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | 2022-10-24T17:01:27Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- fa
- en
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- translation
task_ids:
- translation
---
# Dataset Card for PersiNLU (Machine Translation)
## Table of Contents
- [Dataset Card for PersiNLU (Machine Translation)](#dataset-card-for-persi_nlu_machine_translation)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian-English machine translation dataset (Persian -> English).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`) and English (`en`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"source": "چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
"targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
"category": "mizan_dev_en_fa"
}
```
### Data Fields
- `source`: the input sentences, in Persian.
- `targets`: the list of gold target translations in English.
- `category`: the source from which the example is mined.
### Data Splits
The train/dev/test split contains 1,622,281/2,138/47,745 samples.
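Since each record carries one `source` sentence and a list of gold `targets`, training a translation model typically requires flattening records into (Persian, English) pairs. The snippet below is a small illustrative sketch, not part of the original card, using the sample record shown above.

```python
def to_pairs(example):
    """Flatten a {source, targets, category} record into (fa, en) pairs."""
    return [(example["source"], target) for target in example["targets"]]

sample = {
    "source": "چه زحمتها که بکشد تا منابع مالی را تامین کند اصطلاحات را ترویج کند نهادهایی به راه اندازد.",
    "targets": ["how toil to raise funds, propagate reforms, initiate institutions!"],
    "category": "mizan_dev_en_fa",
}
pairs = to_pairs(sample)
print(len(pairs))  # 1
```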
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| [
-0.4012879431247711,
-0.634809136390686,
0.14700403809547424,
0.28226637840270996,
-0.30991479754447937,
0.037102144211530685,
-0.6336174011230469,
-0.09174567461013794,
0.3828750252723694,
0.4487611949443817,
-0.6739910244941711,
-0.9795015454292297,
-0.7316592931747437,
0.430854529142379... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phongdtd/VinDataVLSP | phongdtd | 2022-01-26T06:49:13Z | 12 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-01-26T06:49:13Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phonlab-tcd/cngv1 | phonlab-tcd | 2021-09-02T16:51:12Z | 12 | 0 | null | [
"region:us"
] | 2021-09-02T16:51:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phonlab-tcd/corpuscrawler-ga | phonlab-tcd | 2021-09-05T09:59:34Z | 12 | 1 | null | [
"region:us"
] | 2021-09-05T09:59:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/vilaquad | projecte-aina | 2023-11-25T05:21:59Z | 12 | 1 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:2107.07903",
"arxiv:1606.05250"... | 2023-11-25T05:21:59Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: VilaQuAD
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for VilaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doi.org/10.5281/zenodo.4562337
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
VilaQuAD is an extractive QA dataset for Catalan, built from [VilaWeb](https://www.vilaweb.cat/) newswire text.
This dataset contains 2,095 Catalan-language news articles along with 1 to 5 questions referring to each fragment (or context).
VilaQuAD articles are extracted from the daily [VilaWeb](https://www.vilaweb.cat/) and used under the [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca) licence.
This dataset can be used to build extractive-QA and Language Models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_556_C_556_Q1',
'title': "El Macba posa en qüestió l'eufòria amnèsica dels anys vuitanta a l'estat espanyol",
'context': "El Macba ha obert una nova exposició, 'Gelatina dura. Històries escamotejades dels 80', dedicada a revisar el discurs hegemònic que es va instaurar en aquella dècada a l'estat espanyol, concretament des del començament de la transició, el 1977, fins a la fita de Barcelona 92. És una mirada en clau espanyola, però també centralista, perquè més enllà dels esdeveniments ocorreguts a Catalunya i els artistes que els van combatre, pràcticament només s'hi mostren fets polítics i culturals generats des de Madrid. No es parla del País Basc, per exemple. Però, dit això, l'exposició revisa aquesta dècada de la història recent tot qüestionant un triomfalisme homogeneïtzador, que ja se sap que va arrasar una gran quantitat de sectors crítics i radicals de l'àmbit social, polític i cultural. Com diu la comissària, Teresa Grandas, de l'equip del Macba: 'El relat oficial dels anys vuitanta a l'estat espanyol va prioritzar la necessitat per damunt de la raó i va consolidar una mirada que privilegiava el futur abans que l'anàlisi del passat recent, obviant qualsevol consideració crítica respecte de la filiació amb el poder franquista.",
'question': 'Com es diu la nova exposició que ha obert el Macba?',
'answers': [
{
'text': "'Gelatina dura. Històries escamotejades dels 80'",
'answer_start': 38
}
]
}
```
### Data Fields
Follows [Rajpurkar, Pranav et al., (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the VilaWeb article.
- `context` (str): VilaWeb section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): Text of the answer span.
  - `answer_start` (int): Character offset of the answer span within the context.
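SQuAD-style offsets should satisfy `context[answer_start : answer_start + len(text)] == text`. The sanity-check sketch below is not from the card; it uses a shortened version of the sample instance shown above (whose answer does start at offset 38).

```python
def answer_is_aligned(context, answer):
    """Check that an answer's offset points at its text within the context."""
    start = answer["answer_start"]
    return context[start:start + len(answer["text"])] == answer["text"]

context = "El Macba ha obert una nova exposició, 'Gelatina dura."
answer = {"text": "'Gelatina dura.", "answer_start": 38}
print(answer_is_aligned(context, answer))  # True
```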
### Data Splits
- train.json: 1295 contexts, 3882 questions
- dev.json: 400 contexts, 1200 questions
- test.json: 400 contexts, 1200 questions
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb site](https://www.vilaweb.cat/)
#### Initial Data Collection and Normalization
The source data are scraped articles from archives of Catalan newspaper website [Vilaweb](https://www.vilaweb.cat).
From the online edition of the newspaper [VilaWeb](https://www.vilaweb.cat), 2095 articles were randomly selected. Their headlines were also used to create a Textual Entailment dataset. For the extractive QA dataset, the creation of between 1 and 5 questions for each news context was commissioned, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 6282 pairs of a question and an extracted fragment that contains the answer were created.
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. We also created [another QA dataset based on Wikipedia](https://huggingface.co/datasets/projecte-aina/viquiquad) to ensure thematic and stylistic variety.
#### Who are the source language producers?
Professional journalists from the Catalan newspaper [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562337)
### Contributions
[N/A] | [
-0.4844214916229248,
-0.7851583361625671,
0.10370670258998871,
0.46587538719177246,
-0.23286713659763336,
0.18869224190711975,
-0.11039204150438309,
-0.38244888186454773,
0.6089984178543091,
0.4617518186569214,
-0.5450010895729065,
-0.7772039175033569,
-0.4501159191131592,
0.28612235188484... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s3h/customized-qalb | s3h | 2021-12-06T18:03:24Z | 12 | 0 | null | [
"region:us"
] | 2021-12-06T18:03:24Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sebastian-hofstaetter/tripclick-training | sebastian-hofstaetter | 2022-07-26T13:16:46Z | 12 | 1 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:other",
"annotations_creators:clicks",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:tripclick",
"license:apache-2.0",
"arxiv:2201.00365",
"region:us"
] | 2022-07-26T13:16:46Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- other
- clicks
language_creators:
- other
language:
- en-US
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: tripclick-training
size_categories:
- unknown
source_datasets: [tripclick]
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# TripClick Baselines with Improved Training Data
*Establishing Strong Baselines for TripClick Health Retrieval* by Sebastian Hofstätter, Sophia Althammer, Mete Sertkan and Allan Hanbury
https://arxiv.org/abs/2201.00365
**tl;dr** We create strong re-ranking and dense retrieval baselines (BERT<sub>CAT</sub>, BERT<sub>DOT</sub>, ColBERT, and TK) for TripClick (health ad-hoc retrieval). We improve the – originally too noisy – training data with a simple negative sampling policy. We achieve large gains over BM25 in the re-ranking and retrieval setting on TripClick, which were not achieved with the original baselines. We publish the improved training files for everyone to use.
If you have any questions, suggestions, or want to collaborate please don't hesitate to get in contact with us via [Twitter](https://twitter.com/s_hofstaetter) or mail to s.hofstaetter@tuwien.ac.at
**Please cite our work as:**
````
@misc{hofstaetter2022tripclick,
title={Establishing Strong Baselines for TripClick Health Retrieval},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Mete Sertkan and Allan Hanbury},
year={2022},
eprint={2201.00365},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
````
## Published Training Files
We publish the improved training files without the text content instead using the ids from TripClick (with permission from the TripClick owners); for the text content please get the full TripClick dataset from [the TripClick Github page](https://github.com/tripdatabase/tripclick).
Our training file **improved_tripclick_train_triple-ids.tsv** has the format ``query_id pos_passage_id neg_passage_id`` (with tab separation).
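Since each line of the triples file is three tab-separated ids, it can be read with plain `csv` parsing. A minimal sketch (the ids below are stand-ins, not real TripClick ids):

```python
# Hypothetical sketch of reading improved_tripclick_train_triple-ids.tsv:
# each line is `query_id<TAB>pos_passage_id<TAB>neg_passage_id`.
import csv
import io

sample = "q1\tp10\tp99\nq2\tp11\tp98\n"  # stand-in for the real file contents

triples = [tuple(row) for row in csv.reader(io.StringIO(sample), delimiter="\t")]
# → [('q1', 'p10', 'p99'), ('q2', 'p11', 'p98')]
```

To use the real file, replace `io.StringIO(sample)` with an open file handle and join the ids against the text content from the TripClick release.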
----
For more information on how to use the training files see: https://github.com/sebastian-hofstaetter/tripclick | [
0.1042734831571579,
-0.3400488793849945,
0.4092942178249359,
0.03844578191637993,
-0.18580670654773712,
0.06497880071401596,
-0.1701272428035736,
-0.24096591770648956,
0.44867268204689026,
0.36840373277664185,
-0.18856973946094513,
-0.787422776222229,
-0.36785227060317993,
-0.0482049137353... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivam/marathi_samanantar_processed | shivam | 2022-01-28T18:20:23Z | 12 | 0 | null | [
"region:us"
] | 2022-01-28T18:20:23Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivmoha/squad_adversarial_manual | shivmoha | 2021-12-17T11:22:45Z | 12 | 0 | null | [
"region:us"
] | 2021-12-17T11:22:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sshleifer/pseudo_bart_xsum | sshleifer | 2021-02-23T13:57:51Z | 12 | 0 | null | [
"region:us"
] | 2021-02-23T13:57:51Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ## Extreme Summarization (XSum) Dataset.
There are two features:
- document: Input news article.
- summary: One sentence summary of the article.
### Citation
```bibtex
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
| [
-0.4276190400123596,
-0.7789697051048279,
0.17594987154006958,
0.17154957354068756,
-0.629169225692749,
-0.01492670550942421,
-0.14355148375034332,
-0.11293698847293854,
0.5344565510749817,
0.4003095030784607,
-0.5042560696601868,
-0.46596962213516235,
-0.6524983644485474,
-0.1767009049654... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stjokerli/TextToText_boolq | stjokerli | 2022-02-28T10:59:36Z | 12 | 0 | null | [
"region:us"
] | 2022-02-28T10:59:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stjokerli/TextToText_boolq_seqio | stjokerli | 2022-03-03T06:33:45Z | 12 | 0 | null | [
"region:us"
] | 2022-03-03T06:33:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stjokerli/TextToText_mnli_seqio | stjokerli | 2022-03-03T06:27:48Z | 12 | 0 | null | [
"region:us"
] | 2022-03-03T06:27:48Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stjokerli/TextToText_rte | stjokerli | 2022-02-28T10:59:03Z | 12 | 0 | null | [
"region:us"
] | 2022-02-28T10:59:03Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svanhvit/iceErrorCorpus | svanhvit | 2021-10-17T21:41:00Z | 12 | 0 | null | [
"region:us"
] | 2021-10-17T21:41:00Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tesemnikov-av/toxic_dataset_ner | tesemnikov-av | 2022-02-08T11:52:12Z | 12 | 0 | null | [
"region:us"
] | 2022-02-08T11:52:12Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teven/all_wikipedia_passages | teven | 2022-01-09T00:13:42Z | 12 | 0 | null | [
"region:us"
] | 2022-01-09T00:13:42Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teven/github_all_lang_filtered | teven | 2022-02-05T00:20:33Z | 12 | 0 | null | [
"region:us"
] | 2022-02-05T00:20:33Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teven/mpww | teven | 2022-01-08T23:25:02Z | 12 | 0 | null | [
"region:us"
] | 2022-01-08T23:25:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null |