id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
itopcu/hate-speech-target | itopcu | 2023-10-30T20:55:39Z | 24 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:tr",
"code",
"region:us"
] | 2023-10-30T20:55:39Z | 2023-10-30T20:47:22.000Z | 2023-10-30T20:47:22 | ---
task_categories:
- text-classification
language:
- tr
tags:
- code
pretty_name: hate speech target detection dataset
size_categories:
- 10K<n<100K
---
https://coltekin.github.io/offensive-turkish/guidelines-tr.html | [
-0.4267733693122864,
-0.6695030331611633,
0.07962292432785034,
0.28559550642967224,
-0.9196814894676208,
-0.6556304693222046,
-0.16408087313175201,
-0.5006667375564575,
0.3587987422943115,
0.8056169152259827,
-0.5735695362091064,
-1.150847315788269,
-0.3033651113510132,
0.22221927344799042... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yashnbx/indic-gretil-dump | yashnbx | 2023-10-31T10:25:58Z | 24 | 0 | null | [
"region:us"
] | 2023-10-31T10:25:58Z | 2023-10-31T10:25:14.000Z | 2023-10-31T10:25:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: title
dtype: string
- name: level
dtype: string
- name: url
dtype: string
- name: f_level
dtype: string
- name: name
dtype: string
- name: people
dtype: string
- name: gpt-descriptions
dtype: string
- name: page_size
dtype: float64
- name: page_content_type
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 618940758
num_examples: 1035
download_size: 254899614
dataset_size: 618940758
---
# Dataset Card for "indic-gretil-dump"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.808981716632843,
-0.17825110256671906,
-0.10651857405900955,
0.219613179564476,
-0.22793444991111755,
-0.08517739176750183,
0.1312015801668167,
-0.15459178388118744,
0.8373743295669556,
0.6862239241600037,
-0.7458817958831787,
-0.8109672665596008,
-0.6584493517875671,
-0.144442021846771... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NathanGavenski/MountainCar-v0 | NathanGavenski | 2023-11-02T16:03:38Z | 24 | 1 | null | [
"size_categories:10M<n<100M",
"license:mit",
"Imitation Learning",
"Expert Trajectory",
"region:us"
] | 2023-11-02T16:03:38Z | 2023-11-02T16:00:28.000Z | 2023-11-02T16:00:28 | ---
license: mit
tags:
- Imitation Learning
- Expert Trajectory
pretty_name: MountainCar-v0 Expert Dataset
size_categories:
- 10M<n<100M
---
# MountainCar-v0 - Imitation Learning Datasets
This is a dataset created by the [Imitation Learning Datasets](https://github.com/NathanGavenski/IL-Datasets) project.
It was created using the Stable Baselines weights of a DQN policy from [HuggingFace](https://huggingface.co/sb3/dqn-MountainCar-v0).
## Description
The dataset consists of 1,000 episodes with an average episodic reward of `-98.817`.
Each entry consists of:
```
obs (list): observation with length 2.
action (int): action (0, 1, or 2).
reward (float): reward for that timestep.
episode_returns (bool): whether that state is the initial timestep of an episode.
```
## Usage
Feel free to download and use the `teacher.jsonl` dataset as you please.
If you are interested in using our PyTorch Dataset implementation, feel free to check the [IL Datasets](https://github.com/NathanGavenski/IL-Datasets/blob/main/src/imitation_datasets/dataset/dataset.py) project.
There, we implement a base Dataset that downloads this dataset and all other datasets directly from HuggingFace.
The base Dataset also allows for more control over train and test splits and over how many episodes you want to use (in cases where all 1,000 episodes are not necessary).
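As a minimal sketch (not the project's official loader), the raw episodes can also be read directly; the snippet below assumes the data sits in `teacher.jsonl` in this repository, stored as one JSON object per line with the fields described above.
```python
import json

from huggingface_hub import hf_hub_download

# Sketch only: download the raw file and group entries into episodes.
# Assumptions: `teacher.jsonl` holds one JSON object per line with the keys
# `obs`, `action`, `reward`, and `episode_returns` described above.
path = hf_hub_download(
    repo_id="NathanGavenski/MountainCar-v0",
    filename="teacher.jsonl",
    repo_type="dataset",
)

episodes, current = [], []
with open(path) as f:
    for line in f:
        entry = json.loads(line)
        if entry["episode_returns"] and current:  # a new episode starts here
            episodes.append(current)
            current = []
        current.append(entry)
if current:
    episodes.append(current)

print(f"Loaded {len(episodes)} episodes")
```
For training, the base Dataset from IL-Datasets mentioned above handles this download and the train/test splitting for you.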
## Citation
Coming soon. | [
-0.4177131652832031,
-0.2208765596151352,
-0.022785423323512077,
0.28235235810279846,
-0.254172146320343,
-0.20801693201065063,
0.05156942456960678,
-0.12429122626781464,
0.4319247305393219,
0.4021902084350586,
-0.8214499950408936,
-0.5237623453140259,
-0.4128762185573578,
-0.1773554235696... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gonglinyuan/mbpp_with_prompt | gonglinyuan | 2023-11-02T21:31:54Z | 24 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-11-02T21:31:54Z | 2023-11-02T21:31:20.000Z | 2023-11-02T21:31:20 | ---
license: cc-by-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ostapeno/qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0_cleaned_5000 | ostapeno | 2023-11-03T00:10:21Z | 24 | 0 | null | [
"region:us"
] | 2023-11-03T00:10:21Z | 2023-11-03T00:10:10.000Z | 2023-11-03T00:10:10 | ---
configs:
- config_name: default
data_files:
- split: abstract_algebra
path: data/abstract_algebra-*
- split: college_biology
path: data/college_biology-*
- split: formal_logic
path: data/formal_logic-*
- split: global_facts
path: data/global_facts-*
- split: high_school_government_and_politics
path: data/high_school_government_and_politics-*
- split: high_school_physics
path: data/high_school_physics-*
- split: machine_learning
path: data/machine_learning-*
- split: prehistory
path: data/prehistory-*
- split: security_studies
path: data/security_studies-*
- split: sociology
path: data/sociology-*
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: docno
dtype: string
- name: subject
dtype: string
- name: icl_examples
dtype: 'null'
- name: author_instr
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: abstract_algebra
num_bytes: 14122630
num_examples: 5000
- name: college_biology
num_bytes: 14635930
num_examples: 5000
- name: formal_logic
num_bytes: 15427206
num_examples: 5000
- name: global_facts
num_bytes: 14445463
num_examples: 5000
- name: high_school_government_and_politics
num_bytes: 15183703
num_examples: 5000
- name: high_school_physics
num_bytes: 15619770
num_examples: 5000
- name: machine_learning
num_bytes: 15563263
num_examples: 5000
- name: prehistory
num_bytes: 15668853
num_examples: 5000
- name: security_studies
num_bytes: 15139669
num_examples: 5000
- name: sociology
num_bytes: 14238107
num_examples: 5000
download_size: 21518891
dataset_size: 150044594
---
# Dataset Card for "qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0_cleaned_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5828277468681335,
0.0625329464673996,
0.052915796637535095,
0.26146265864372253,
-0.38706013560295105,
-0.3088403344154358,
0.1746988147497177,
-0.01404124591499567,
0.6000560522079468,
0.6452704668045044,
-0.7451239824295044,
-0.7334586381912231,
-0.33786511421203613,
-0.00296946102753... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_polyglot_5.8B_t2_data | eunbinni | 2023-11-03T17:28:51Z | 24 | 0 | null | [
"region:us"
] | 2023-11-03T17:28:51Z | 2023-11-03T17:28:47.000Z | 2023-11-03T17:28:47 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 29998082
num_examples: 107174
download_size: 18601058
dataset_size: 29998082
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_polyglot_5.8B_t2_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5280656218528748,
-0.36023300886154175,
0.21415302157402039,
0.13973353803157806,
-0.40344101190567017,
-0.03388219699263573,
0.3382875621318817,
-0.42837509512901306,
0.7297059893608093,
0.5207189917564392,
-0.3975384533405304,
-0.850872814655304,
-0.6022729873657227,
-0.41967722773551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rchiang/mergedAgentInstruct | rchiang | 2023-11-03T22:18:41Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-03T22:18:41Z | 2023-11-03T21:53:51.000Z | 2023-11-03T21:53:51 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: loss
dtype: bool
- name: value
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 8042511
num_examples: 1866
download_size: 0
dataset_size: 8042511
---
direct copy of [AgentInstruct](https://huggingface.co/datasets/THUDM/AgentInstruct) for use in Axolotl. | [
0.13327130675315857,
-0.14313054084777832,
0.10553023964166641,
0.6318323016166687,
-0.033766161650419235,
0.10576682537794113,
0.48972615599632263,
-0.4174385666847229,
0.6166744232177734,
0.9103996157646179,
-0.6001931428909302,
-0.5964807271957397,
0.20535285770893097,
0.173508077859878... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/basic | vietgpt | 2023-11-03T22:35:29Z | 24 | 0 | null | [
"region:us"
] | 2023-11-03T22:35:29Z | 2023-11-03T22:35:26.000Z | 2023-11-03T22:35:26 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 3024
num_examples: 13
download_size: 3151
dataset_size: 3024
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "basic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6495829820632935,
-0.4594378173351288,
0.22343645989894867,
0.14711841940879822,
-0.22397872805595398,
-0.04187171906232834,
0.2719663679599762,
-0.13917942345142365,
0.8631096482276917,
0.5490695238113403,
-0.9097766876220703,
-0.8367718458175659,
-0.6034184098243713,
-0.18889327347278... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mikehemberger/darcai-life-on-earth | mikehemberger | 2023-11-21T16:15:02Z | 24 | 0 | null | [
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"language:de",
"license:mit",
"bi... | 2023-11-21T16:15:02Z | 2023-11-04T15:21:45.000Z | 2023-11-04T15:21:45 | ---
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': life-on-earth-s01-e01-the-infinite-varirty
'1': life-on-earth-s01-e02-building-bodies
'2': life-on-earth-s01-e03-the-first-forests
'3': life-on-earth-s01-e04-the-swarming-hordes
'4': life-on-earth-s01-e05-conquest-of-the-waters
'5': life-on-earth-s01-e06-invasion-of-the-land
'6': life-on-earth-s01-e07-victors-of-the-dry-land
'7': life-on-earth-s01-e08-lords-of-the-air
'8': life-on-earth-s01-e09-the-rise-of-the-mammals
'9': life-on-earth-s01-e10-themes-and-variations
'10': life-on-earth-s01-e11-the-hunters-and-the-hunted
'11': life-on-earth-s01-e12-life-in-the-trees
'12': life-on-earth-s01-e13-the-compulsive-communicators
- name: file_name
dtype: string
- name: show_name
dtype: string
- name: relative_path
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1072256363
num_examples: 84323
download_size: 1059335669
dataset_size: 1072256363
license: mit
task_categories:
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- sentence-similarity
language:
- en
- de
tags:
- biology
- climate
- code
---
# Dataset Card for "life-on-earth"
## Dataset Description
- **Homepage-Videos:** https://archive.org/download/WildlifeDocumentaries
- **Homepage-Dataset:** https://huggingface.co/datasets/mikehemberger/darcai-life-on-earth
- **Repository:** https://github.com/mikehemberger/darc-ai
- **Point of Contact:** mikehemberger@gmail.com
### Dataset Summary
The David Attenborough Research Consortium (DARC) loves David Attenborough (DA), and we therefore aim to enrich his fantastic work using modern deep learning, generative artificial intelligence (AI) methods, and recent assistants such as ChatGPT. Those results, together with extracted, timestamped image frames ("frame_00000_hh-mm-ss.msmsms.jpg", ...) from the videos, constitute the darcai-life-on-earth dataset.
As a first enrichment, we include text captions generated with the Hugging Face "Salesforce/blip2-opt-2.7b" model for more than 84K image frames as a ready-to-go dataset.
Furthermore, our [GitHub repository](https://github.com/mikehemberger/darc-ai) includes ViT image embeddings (dim=768) and caption-text embeddings (using OpenAI's "text-embedding-ada-002" model, dim=1536) for all >84K images.
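For reference, a minimal sketch of how a caption for a single extracted frame could be reproduced with the 🤗 Transformers BLIP-2 model named above (GPU placement and batching are omitted; the file name is just the example instance from this dataset):
```python
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Sketch only: caption one frame with the captioning model used for this dataset.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("frame_00000_00-00-00.000.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
caption = processor.decode(generated_ids[0], skip_special_tokens=True).strip()
print(caption)  # e.g. "a black background with a white clock on it"
```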
### Languages
Mostly native English, with some German; hopefully many more soon.
## Dataset Structure
### Data Instances
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=622x360>,
'label': 0,
'file_name': 'frame_00000_00-00-00.000.jpg',
'show_name': 'life-on-earth-s01-e01-the-infinite-varirty',
'relative_path': 'images/life-on-earth/life-on-earth-s01-e01-the-infinite-varirty',
'caption': 'a black background with a white clock on it'
}
### Data Fields
- image: a PIL image frame extracted from video (decode=True)
- label: One of [0-12] according to 13 episodes
- file_name: file name of the PIL image
- show_name: name of the show and episode from which the images were extracted
- relative_path: where to find the images
- caption: text caption for the image, generated by the Hugging Face Transformers BLIP-2 model ("Salesforce/blip2-opt-2.7b")
## Dataset Creation | [
-0.669987142086029,
-0.43395575881004333,
0.06397522985935211,
0.13363173604011536,
-0.5057139992713928,
0.07340626418590546,
-0.03621188551187515,
-0.48488959670066833,
0.41090092062950134,
0.3039838969707489,
-0.9634631276130676,
-0.5320128202438354,
-0.7224492430686951,
0.15787072479724... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aditijha/instruct_v3_5k_and_lima | aditijha | 2023-11-05T03:28:31Z | 24 | 0 | null | [
"region:us"
] | 2023-11-05T03:28:31Z | 2023-11-05T03:28:29.000Z | 2023-11-05T03:28:29 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 22803070
num_examples: 6000
download_size: 13069762
dataset_size: 22803070
---
# Dataset Card for "instruct_v3_5k_and_lima"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47140368819236755,
-0.08492579311132431,
0.3965863883495331,
0.5452255606651306,
-0.3326716721057892,
-0.20344866812229156,
0.5811906456947327,
-0.4243707060813904,
0.7572810053825378,
0.7324178814888,
-0.7542842626571655,
-0.8279442191123962,
-0.6938474774360657,
-0.02015179581940174,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hippocrates/medical_meadow_mediqa_train | hippocrates | 2023-11-06T18:42:12Z | 24 | 0 | null | [
"region:us"
] | 2023-11-06T18:42:12Z | 2023-11-06T18:42:10.000Z | 2023-11-06T18:42:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30570668
num_examples: 2208
download_size: 12800020
dataset_size: 30570668
---
# Dataset Card for "medical_meadow_mediqa_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.17632198333740234,
0.059011541306972504,
0.33375856280326843,
0.2340981364250183,
0.055741116404533386,
-0.0774131789803505,
0.497991144657135,
-0.14887206256389618,
0.7947208285331726,
0.41923803091049194,
-1.0561542510986328,
-0.8511866927146912,
-0.5687644481658936,
-0.39959549903869... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Javiparmu/ui-icons | Javiparmu | 2023-11-14T20:16:43Z | 24 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | 2023-11-14T20:16:43Z | 2023-11-07T23:02:33.000Z | 2023-11-07T23:02:33 | ---
license: gpl-3.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 23916866.0
num_examples: 996
download_size: 21285499
dataset_size: 23916866.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gg-ai/es-0712-no-demoji-m | gg-ai | 2023-11-08T16:54:39Z | 24 | 0 | null | [
"region:us"
] | 2023-11-08T16:54:39Z | 2023-11-08T16:54:31.000Z | 2023-11-08T16:54:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: clean_text
dtype: string
- name: sent
dtype: int64
splits:
- name: train
num_bytes: 5850039
num_examples: 16256
- name: test
num_bytes: 1177134
num_examples: 3252
- name: val
num_bytes: 297532
num_examples: 813
download_size: 4682068
dataset_size: 7324705
---
# Dataset Card for "es-0712-no-demoji-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4394376277923584,
-0.055810313671827316,
0.24349813163280487,
0.16606295108795166,
-0.3755134642124176,
-0.19823341071605682,
0.03190821781754494,
0.05458992347121239,
1.1264091730117798,
0.6036430597305298,
-1.1312532424926758,
-1.0072497129440308,
-0.5610135793685913,
0.11615999788045... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sheape/heart-arrhythmias-symptoms | Sheape | 2023-11-09T06:20:16Z | 24 | 0 | null | [
"region:us"
] | 2023-11-09T06:20:16Z | 2023-11-09T03:04:27.000Z | 2023-11-09T03:04:27 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BENBENBENb/ARC1000COT | BENBENBENb | 2023-11-10T00:14:29Z | 24 | 0 | null | [
"task_categories:question-answering",
"language:en",
"region:us"
] | 2023-11-10T00:14:29Z | 2023-11-10T00:12:51.000Z | 2023-11-10T00:12:51 | ---
task_categories:
- question-answering
language:
- en
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
habanoz/airoboros-3.1-no-mathjson-max-1k | habanoz | 2023-11-11T13:24:48Z | 24 | 0 | null | [
"region:us"
] | 2023-11-11T13:24:48Z | 2023-11-11T13:21:11.000Z | 2023-11-11T13:21:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 40852711.20890598
num_examples: 20180
download_size: 6394016
dataset_size: 40852711.20890598
---
# Dataset Card for "airoboros-3.1-no-mathjson-max-1k"
This is a modified version of the 'jondurbin/airoboros-3.1' dataset:
- mathjson instances are excluded
- the length of input + output + special tokens is limited to 1024 tokens (the Llama chat format is assumed)
| [
-0.5871449708938599,
-0.4056474566459656,
-0.16546717286109924,
0.23743923008441925,
-0.5088458061218262,
-0.004755455069243908,
0.10279577225446701,
-0.30960002541542053,
0.8506045341491699,
0.8619143962860107,
-0.6349236965179443,
-0.4347709119319916,
-0.7851518392562866,
0.4076820909976... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vilm/viet-pretrained-002 | vilm | 2023-11-15T04:24:21Z | 24 | 0 | null | [
"region:us"
] | 2023-11-15T04:24:21Z | 2023-11-15T04:22:27.000Z | 2023-11-15T04:22:27 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4502549932
num_examples: 258823
download_size: 2324002719
dataset_size: 4502549932
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "viet-pretrained-002"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3548663258552551,
-0.11107183247804642,
0.29952800273895264,
0.22376635670661926,
-0.34057798981666565,
-0.12373701483011246,
0.3766022324562073,
0.03116535395383835,
0.7246348261833191,
0.6079107522964478,
-0.9010978937149048,
-0.8119989037513733,
-0.6357681155204773,
-0.21574006974697... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
YuhoLiang/CVPR2023 | YuhoLiang | 2023-11-15T06:36:44Z | 24 | 0 | null | [
"size_categories:1K<n<10K",
"region:us"
] | 2023-11-15T06:36:44Z | 2023-11-15T06:11:32.000Z | 2023-11-15T06:11:32 | ---
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
miguel-kjh/Geography_books_dataset | miguel-kjh | 2023-11-15T16:34:24Z | 24 | 0 | null | [
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"region:us"
] | 2023-11-15T16:34:24Z | 2023-11-15T12:54:17.000Z | 2023-11-15T12:54:17 | ---
license: mit
size_categories:
- 10M<n<100M
language:
- en
pretty_name: Geo text
---
This dataset is a subset of Project Gutenberg that focuses only on geography texts.
Books (11M tokens in total):
- The 1990 CIA World Factbook
- Commercial Geography
- Influences of Geographic Environment
- Geographical etymology: a dictionary of place-names giving their derivations
- Geography and Plays
- Physical Geography | [
-0.44037315249443054,
-0.48125264048576355,
0.3203265070915222,
-0.1439855843782425,
-0.1167040690779686,
0.05900087580084801,
-0.21622663736343384,
-0.2573433518409729,
0.3619915544986725,
1.0103342533111572,
-0.7881225943565369,
-0.9407179951667786,
-0.5042005777359009,
-0.07463533431291... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zaursamedov1/test | zaursamedov1 | 2023-11-15T18:19:46Z | 24 | 0 | null | [
"region:us"
] | 2023-11-15T18:19:46Z | 2023-11-15T18:17:52.000Z | 2023-11-15T18:17:52 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kpriyanshu256/semeval-task-8-b-v2-mistral-7b | kpriyanshu256 | 2023-11-15T18:29:49Z | 24 | 0 | null | [
"region:us"
] | 2023-11-15T18:29:49Z | 2023-11-15T18:29:44.000Z | 2023-11-15T18:29:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: model
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: id
dtype: int64
- name: mistral-7b_estimated_loss
dtype: float64
- name: mistral-7b_mean_lowest25
dtype: float64
- name: mistral-7b_mean_highest25
dtype: float64
- name: mistral-7b_max
dtype: float64
- name: mistral-7b_min
dtype: float64
- name: mistral-7b_range
dtype: float64
- name: mistral-7b_mean
dtype: float64
- name: mistral-7b_std
dtype: float64
- name: mistral-7b_entropy
dtype: float64
- name: mistral-7b_kurtosis
dtype: float64
- name: mistral-7b_skewness
dtype: float64
- name: mistral-7b_perplexity
dtype: float64
splits:
- name: train
num_bytes: 127022360
num_examples: 56821
- name: val
num_bytes: 31364223
num_examples: 14206
- name: test
num_bytes: 5102312
num_examples: 3000
download_size: 96394782
dataset_size: 163488895
---
# Dataset Card for "semeval-task-8-b-v2-mistral-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3058379292488098,
-0.24697570502758026,
0.22659696638584137,
0.46010589599609375,
-0.36182525753974915,
-0.39615198969841003,
0.4277597963809967,
-0.0665222629904747,
0.6896442770957947,
0.7241239547729492,
-0.8152219653129578,
-0.5414648056030273,
-0.8098257184028625,
-0.22988606989383... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bhabya-28/Hostage_Data | Bhabya-28 | 2023-11-16T11:09:46Z | 24 | 0 | null | [
"region:us"
] | 2023-11-16T11:09:46Z | 2023-11-16T11:09:20.000Z | 2023-11-16T11:09:20 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuyijiong/Sharegpt-long-conversation | yuyijiong | 2023-11-18T06:15:26Z | 24 | 1 | null | [
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-18T06:15:26Z | 2023-11-18T06:09:35.000Z | 2023-11-18T06:09:35 | ---
license: cc-by-nc-4.0
language:
- zh
- en
---
* Long conversations filtered from the [sharegpt-38k](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) and [sharegpt-90k](RyokoAI/ShareGPT52K) datasets, each longer than 8k characters (more than 8k words for English, more than 8k Chinese characters for Chinese)
* Already converted to the chatml conversation format | [
-0.3844155967235565,
-0.6823815107345581,
0.08449666202068329,
0.7229359745979309,
-0.5154359340667725,
-0.16910123825073242,
-0.3841387629508972,
-0.46407076716423035,
0.43478187918663025,
0.35387277603149414,
-0.5702407956123352,
-0.6414528489112854,
-1.0528613328933716,
0.16695144772529... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thanaphatt1/LongAlpaca-16kcontext-enth-and-WikiQA | thanaphatt1 | 2023-11-18T10:30:21Z | 24 | 0 | null | [
"region:us"
] | 2023-11-18T10:30:21Z | 2023-11-18T08:07:10.000Z | 2023-11-18T08:07:10 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 1175520496.8399894
num_examples: 23801
download_size: 413457371
dataset_size: 1175520496.8399894
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
victorzarzu/interior-design-editing-prompts | victorzarzu | 2023-11-18T20:59:37Z | 24 | 0 | null | [
"region:us"
] | 2023-11-18T20:59:37Z | 2023-11-18T09:11:39.000Z | 2023-11-18T09:11:39 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 469332
num_examples: 8833
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bdsaglam/web_nlg-erx-instruction-llama2chat | bdsaglam | 2023-11-25T14:07:57Z | 24 | 0 | null | [
"region:us"
] | 2023-11-25T14:07:57Z | 2023-11-19T11:49:47.000Z | 2023-11-19T11:49:47 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 24859305
num_examples: 35426
- name: dev
num_bytes: 3128262
num_examples: 4464
download_size: 2815320
dataset_size: 27987567
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eligrayy/OE-LoL-Esports-Dataset | eligrayy | 2023-11-19T22:36:07Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-19T22:36:07Z | 2023-11-19T22:28:59.000Z | 2023-11-19T22:28:59 | ---
license: apache-2.0
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sall6550/db1 | sall6550 | 2023-11-25T16:55:43Z | 24 | 0 | null | [
"region:us"
] | 2023-11-25T16:55:43Z | 2023-11-22T12:21:32.000Z | 2023-11-22T12:21:32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 68866
num_examples: 82
download_size: 27539
dataset_size: 68866
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lnwang/retrieval_qa | lnwang | 2023-11-28T07:08:13Z | 24 | 4 | null | [
"size_categories:n<1K",
"language:en",
"language:zh",
"license:apache-2.0",
"art",
"region:us"
] | 2023-11-28T07:08:13Z | 2023-11-24T03:26:11.000Z | 2023-11-24T03:26:11 | ---
language:
- en
- zh
license: apache-2.0
size_categories:
- n<1K
dataset_info:
- config_name: default
features:
- name: region
dtype: string
- name: doc
dtype: string
- name: query
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 231728
num_examples: 196
download_size: 115496
dataset_size: 231728
- config_name: en
features:
- name: region
dtype: string
- name: doc
dtype: string
- name: query
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 231728
num_examples: 196
download_size: 115496
dataset_size: 231728
- config_name: zh_cn
features:
- name: region
dtype: string
- name: doc
dtype: string
- name: query
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 199143
num_examples: 196
download_size: 115136
dataset_size: 199143
- config_name: zh_tw
features:
- name: region
dtype: string
- name: doc
dtype: string
- name: query
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 199632
num_examples: 196
download_size: 113408
dataset_size: 199632
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: zh_cn
data_files:
- split: test
path: zh_cn/test-*
- config_name: zh_tw
data_files:
- split: test
path: zh_tw/test-*
tags:
- art
---
# Retrieval_QA: A Simple Multilingual Benchmark For Retrieval Encoder Models
<!-- Provide a quick summary of the dataset. -->
The purpose of this dataset is to provide a simple and easy-to-use benchmark for retrieval encoder models, which helps researchers quickly select the most effective retrieval encoder for text extraction and achieve optimal results in subsequent retrieval tasks such as retrieval-augmented-generation (RAG). The dataset contains multiple document-question pairs, where each document is a short text about the history, culture, or other information of a country or region, and each question is a query relevant to the content of the corresponding document.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
Users may select a retrieval encoder model to encode each document and query into corresponding embeddings, and then use vector-matching methods such as FAISS to identify the most relevant documents for each query as retrieval results.
+ **Curated by**: <a href='https://wln20.github.io'>Luning Wang</a>
+ **Language(s)**: English, Chinese(Simplified, Traditional)
+ **License**: Apache-2.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/wln20/Retrieval_QA
- **Paper:** TBD
- **Demo:** TBD
## Uses
The dataset is available on 🤗 Huggingface, you can conveniently use it in python with 🤗 Datasets:
```python
from datasets import load_dataset
dataset_en = load_dataset('lnwang/retrieval_qa', name='en')
# dataset_zh_cn = load_dataset('lnwang/retrieval_qa', name='zh_cn')
# dataset_zh_tw = load_dataset('lnwang/retrieval_qa', name='zh_tw')
```
Now we support three languages: English (en), Simplified Chinese (zh_cn), and Traditional Chinese (zh_tw). You can specify the `name` argument in `load_dataset()` to get the corresponding subset.
For more usages, please follow the examples in the GitHub repository of this project.
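As a rough sketch of the intended benchmark loop, the snippet below encodes documents and queries with a placeholder encoder and matches them with FAISS; it assumes each row of the `test` split pairs a query with its gold document, and deduplication of repeated documents is omitted for brevity.
```python
import faiss
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

dataset = load_dataset("lnwang/retrieval_qa", name="en")["test"]

# Placeholder encoder: swap in whichever retrieval encoder you want to compare.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_emb = encoder.encode(dataset["doc"], normalize_embeddings=True)
query_emb = encoder.encode(dataset["query"], normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(doc_emb.shape[1])
index.add(np.asarray(doc_emb, dtype="float32"))
_, top1 = index.search(np.asarray(query_emb, dtype="float32"), 1)

accuracy = float(np.mean(top1[:, 0] == np.arange(len(dataset))))
print(f"Top-1 retrieval accuracy: {accuracy:.3f}")
```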
## Dataset Creation
The raw data was generated by GPT-3.5-turbo, using prompts carefully designed by humans. The data was also cleaned to remove controversial information. | [
-0.32022127509117126,
-0.4689423143863678,
0.26112717390060425,
0.2826874554157257,
-0.3342309594154358,
-0.1692238748073578,
-0.09910737723112106,
-0.3174302875995636,
0.1403455287218094,
0.26642686128616333,
-0.3411656618118286,
-0.7016225457191467,
-0.1724518984556198,
0.353232920169830... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/alpaca-cot-collection-jsonifize | jsonifize | 2023-11-24T13:59:27Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T13:59:27Z | 2023-11-24T13:57:49.000Z | 2023-11-24T13:57:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/alpaca-cot-collection_stringified-jsonifize | jsonifize | 2023-11-24T14:04:03Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:04:03Z | 2023-11-24T14:02:39.000Z | 2023-11-24T14:02:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/EverythingLM-data-V3_stringified-jsonifize | jsonifize | 2023-11-24T14:05:22Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:22Z | 2023-11-24T14:05:21.000Z | 2023-11-24T14:05:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/Feedback-Collection_stringified-jsonifize | jsonifize | 2023-11-24T14:05:36Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:36Z | 2023-11-24T14:05:23.000Z | 2023-11-24T14:05:23 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/full-hh-rlhf_stringified-jsonifize | jsonifize | 2023-11-24T14:05:48Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:05:48Z | 2023-11-24T14:05:37.000Z | 2023-11-24T14:05:37 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/hh-rlhf_stringified-jsonifize | jsonifize | 2023-11-24T14:06:10Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:06:10Z | 2023-11-24T14:05:55.000Z | 2023-11-24T14:05:55 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/know-saraswati-alpaca-cot-by-knowrohit_stringified-jsonifize | jsonifize | 2023-11-24T14:06:18Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:06:18Z | 2023-11-24T14:06:11.000Z | 2023-11-24T14:06:11 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/LogicInference_OA_stringified-jsonifize | jsonifize | 2023-11-24T14:06:22Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:06:22Z | 2023-11-24T14:06:19.000Z | 2023-11-24T14:06:19 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jsonifize/openhermes_stringified-jsonifize | jsonifize | 2023-11-24T14:06:36Z | 24 | 0 | null | [
"region:us"
] | 2023-11-24T14:06:36Z | 2023-11-24T14:06:23.000Z | 2023-11-24T14:06:23 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
6alesso9/crime_data | 6alesso9 | 2023-11-25T15:05:43Z | 24 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-25T15:05:43Z | 2023-11-25T15:03:02.000Z | 2023-11-25T15:03:02 | ---
license: apache-2.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OrdalieTech/Ordalie-FR-Reranking-benchmark | OrdalieTech | 2023-11-26T16:36:40Z | 24 | 0 | null | [
"region:us"
] | 2023-11-26T16:36:40Z | 2023-11-25T22:40:12.000Z | 2023-11-25T22:40:12 | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: test
num_bytes: 22164217
num_examples: 1961
download_size: 11999345
dataset_size: 22164217
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mnoukhov/openai_summarize_comparisons_tldrprompt | mnoukhov | 2023-11-27T07:03:20Z | 24 | 0 | null | [
"region:us"
] | 2023-11-27T07:03:20Z | 2023-11-27T06:57:39.000Z | 2023-11-27T06:57:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid1
path: data/valid1-*
- split: valid2
path: data/valid2-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 157425966
num_examples: 92534
- name: test
num_bytes: 143018505
num_examples: 83629
- name: valid1
num_bytes: 56686271
num_examples: 33082
- name: valid2
num_bytes: 86396487
num_examples: 50715
download_size: 0
dataset_size: 443527229
---
# Dataset Card for "openai_summarize_comparisons_tldrprompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5392696857452393,
0.015761300921440125,
-0.02683466300368309,
0.1582464575767517,
-0.2607809603214264,
-0.1616269201040268,
0.06530722975730896,
-0.18313193321228027,
0.7946944832801819,
0.34304943680763245,
-0.5272072553634644,
-0.7613081336021423,
-0.5958765149116516,
-0.1610877811908... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mtlew/0001_Angry_test | mtlew | 2022-02-18T08:12:41Z | 23 | 0 | null | [
"region:us"
] | 2022-02-18T08:12:41Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zapsdcn/hyperpartisan_news | zapsdcn | 2021-12-08T20:17:10Z | 23 | 0 | null | [
"region:us"
] | 2021-12-08T20:17:10Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/bt11 | ruanchaves | 2022-10-20T19:13:02Z | 23 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-10-20T19:13:02Z | 2022-03-05T22:41:34.000Z | 2022-03-05T22:41:34 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: BT11
tags:
- word-segmentation
---
# Dataset Card for BT11
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 20170,
"identifier": "currentLineHighlight",
"segmentation": "current Line Highlight"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters (as illustrated in the sketch after this list). Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
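A minimal sketch of that invariant, checked on the data instance shown above (the `train` split name used for the full check is an assumption, not taken from this card):
```python
from datasets import load_dataset

def check(example: dict) -> None:
    # The gold segmentation must equal the identifier once whitespace is removed.
    assert example["segmentation"].replace(" ", "") == example["identifier"]

# Holds for the data instance shown above ...
check({"identifier": "currentLineHighlight", "segmentation": "current Line Highlight"})

# ... and can be run over the dataset itself (assuming a "train" split).
for example in load_dataset("ruanchaves/bt11", split="train"):
    check(example)
```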
## Additional Information
### Citation Information
```
@inproceedings{butler2011improving,
title={Improving the tokenisation of identifier names},
author={Butler, Simon and Wermelinger, Michel and Yu, Yijun and Sharp, Helen},
booktitle={European Conference on Object-Oriented Programming},
pages={130--154},
year={2011},
organization={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | [
-0.671744167804718,
-0.6125376224517822,
0.11992461234331131,
0.16431264579296112,
-0.4688793420791626,
0.22568632662296295,
-0.11684870719909668,
-0.5602858662605286,
0.054019127041101456,
0.21446150541305542,
-0.5137059092521667,
-0.7336965203285217,
-0.6310333609580994,
0.13195085525512... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ruanchaves/jhotdraw | ruanchaves | 2022-10-20T19:12:53Z | 23 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-10-20T19:12:53Z | 2022-03-05T23:13:37.000Z | 2022-03-05T23:13:37 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Jhotdraw
tags:
- word-segmentation
---
# Dataset Card for Jhotdraw
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "abstractconnectorserializeddataversion",
"segmentation": "abstract connector serialized data version"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | [
-0.502277135848999,
-0.5693398714065552,
0.069700226187706,
0.12698307633399963,
-0.4561682641506195,
0.3803849518299103,
-0.20347389578819275,
-0.49140220880508423,
0.18657533824443817,
0.26119908690452576,
-0.5770845413208008,
-0.7854230403900146,
-0.5950689315795898,
0.02149732597172260... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
h4iku/coconut_python2010 | h4iku | 2023-09-28T23:17:32Z | 23 | 0 | null | [
"code",
"region:us"
] | 2023-09-28T23:17:32Z | 2022-03-30T01:03:32.000Z | 2022-03-30T01:03:32 | ---
tags:
- code
pretty_name: CoCoNuT-Python(2010)
---
# Dataset Card for CoCoNuT-Python(2010)
## Dataset Description
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0)
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact)
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369)
### Dataset Summary
Part of the data used to train the models in the "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair" paper.
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized.
The year in the dataset’s name is the cut-off year, i.e. the year of the newest commit in the dataset.
### Languages
- Python
## Dataset Structure
### Data Fields
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`.
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`.
### Data Instances
There is a mapping between the 4 columns for each instance.
For example:
The first 5 rows of `rem` (i.e., the buggy line/hunk):
```
1 public synchronized StringBuffer append(char ch)
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
3 public String substring(int beginIndex, int endIndex)
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
5 public Object next() {
```
The first 5 rows of `add` (i.e., the fixed line/hunk):
```
1 public StringBuffer append(Object obj)
2 return append(obj == null ? "null" : obj.toString());
3 public String substring(int begin)
4 return substring(begin, count);
5 public FSEntry next() {
```
These map to the 5 instances:
```diff
- public synchronized StringBuffer append(char ch)
+ public StringBuffer append(Object obj)
```
```diff
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this;
+ return append(obj == null ? "null" : obj.toString());
```
```diff
- public String substring(int beginIndex, int endIndex)
+ public String substring(int begin)
```
```diff
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length);
+ return substring(begin, count);
```
```diff
- public Object next() {
+ public FSEntry next() {
```
`context` contains the associated "context". Context is the (in-lined) buggy function (including the buggy lines and comments).
For example, the context of
```
public synchronized StringBuffer append(char ch)
```
is its associated function:
```java
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; }
```
`meta` contains some metadata about the project:
```
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java
```
`1056` is the project id. `/local/...` is the absolute path to the buggy file. This can be parsed to extract the commit id: `68a6301301378680519f2b146daec37812a1bc22`, the file name: `StringBuffer.java` and the original path within the project
`core/src/classpath/java/java/lang/StringBuffer.java`
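A small sketch of that parsing, applied to the `meta` line above (the `<project id> <absolute path>` layout with a `/buggy/` separator is inferred from this single example and may not hold for every row):
```python
import re

meta = (
    "1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
    "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
    "core/src/classpath/java/java/lang/StringBuffer.java"
)

project_id, path = meta.split(" ", 1)
commit_id = re.search(r"/([0-9a-f]{40})/", path).group(1)   # 40-char hex segment
original_path = path.split("/buggy/", 1)[1]                 # path within the project
file_name = original_path.rsplit("/", 1)[-1]

print(project_id)     # 1056
print(commit_id)      # 68a6301301378680519f2b146daec37812a1bc22
print(file_name)      # StringBuffer.java
print(original_path)  # core/src/classpath/java/java/lang/StringBuffer.java
```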
| Number of projects | Number of Instances |
| ------------------ |-------------------- |
| 13,899 | 480,777 |
## Dataset Creation
### Curation Rationale
Data is collected to train automated program repair (APR) models.
### Citation Information
```bib
@inproceedings{lutellierCoCoNuTCombiningContextaware2020,
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair},
shorttitle = {{{CoCoNuT}}},
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin},
year = {2020},
month = jul,
series = {{{ISSTA}} 2020},
pages = {101--114},
publisher = {{Association for Computing Machinery}},
address = {{New York, NY, USA}},
doi = {10.1145/3395363.3397369},
url = {https://doi.org/10.1145/3395363.3397369},
urldate = {2022-12-06},
isbn = {978-1-4503-8008-9},
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation}
}
```
| [
-0.3754023611545563,
-0.7230914235115051,
0.1960383653640747,
0.17579509317874908,
-0.3662126958370209,
0.16426412761211395,
-0.2975408732891083,
-0.5018488168716431,
0.24371954798698425,
0.30483105778694153,
-0.4597831070423126,
-0.527748167514801,
-0.47213229537010193,
0.2750970125198364... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marksverdhei/reddit-syac-urls | marksverdhei | 2022-06-07T13:43:15Z | 23 | 0 | null | [
"region:us"
] | 2022-06-07T13:43:15Z | 2022-03-30T22:32:30.000Z | 2022-03-30T22:32:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
UrukHan/t5-russian-summarization | UrukHan | 2022-04-02T18:07:55Z | 23 | 2 | null | [
"region:us"
] | 2022-04-02T18:07:55Z | 2022-04-02T18:07:04.000Z | 2022-04-02T18:07:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NbAiLab/NST | NbAiLab | 2022-08-12T14:09:29Z | 23 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-08-12T14:09:29Z | 2022-04-20T12:06:56.000Z | 2022-04-20T12:06:56 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/mmarco-passage | crystina-z | 2022-05-13T21:32:59Z | 23 | 0 | null | [
"region:us"
] | 2022-05-13T21:32:59Z | 2022-05-06T07:14:38.000Z | 2022-05-06T07:14:38 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
peandrew/conceptnet_en_nomalized | peandrew | 2022-05-08T03:11:02Z | 23 | 1 | null | [
"region:us"
] | 2022-05-08T03:11:02Z | 2022-05-08T01:47:33.000Z | 2022-05-08T01:47:33 | This is the English part of ConceptNet, with extraneous information removed. | [
-0.2414526641368866,
-0.7588168978691101,
0.18685618042945862,
-0.21483886241912842,
-0.610282301902771,
-0.29116734862327576,
0.22746655344963074,
-0.4758046865463257,
0.9544272422790527,
0.854945719242096,
-0.7329572439193726,
-0.33294451236724854,
-0.12320289760828018,
-0.19034326076507... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/rustance | strombergnlp | 2022-10-25T21:46:32Z | 23 | 1 | rustance | [
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ru",
"license:cc-by-4.0",
"stance... | 2022-10-25T21:46:32Z | 2022-05-09T08:53:27.000Z | 2022-05-09T08:53:27 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ru
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- sentiment-classification
paperswithcode_id: rustance
pretty_name: RuStance
tags:
- stance-detection
---
# Dataset Card for "rustance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/dataset_csv/7151906](https://figshare.com/articles/dataset/dataset_csv/7151906)
- **Repository:** [https://github.com/StrombergNLP/rustance](https://github.com/StrombergNLP/rustance)
- **Paper:** [https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16](https://link.springer.com/chapter/10.1007/978-3-030-14687-0_16), [https://arxiv.org/abs/1809.01574](https://arxiv.org/abs/1809.01574)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
### Dataset Summary
This is a stance prediction dataset in Russian. The dataset contains comments on news articles,
and rows are a comment, the title of the news article it responds to, and the stance of the comment
towards the article.
Stance detection is a critical component of rumour and fake news identification. It involves the extraction of the stance a particular author takes related to a given claim, both expressed in text. This paper investigates stance classification for Russian. It introduces a new dataset, RuStance, of Russian tweets and news comments from multiple sources, covering multiple stories, as well as text classification approaches to stance detection as benchmarks over this data in this language. As well as presenting this openly-available dataset, the first of its kind for Russian, the paper presents a baseline for stance prediction in the language.
### Supported Tasks and Leaderboards
* Stance Detection: [Stance Detection on RuStance](https://paperswithcode.com/sota/stance-detection-on-rustance)
### Languages
Russian, as spoken on the Meduza website (i.e. from multiple countries) (`bcp47:ru`)
## Dataset Structure
### Data Instances
#### rustance
- **Size of downloaded dataset files:** 349.79 KiB
- **Size of the generated dataset:** 366.11 KiB
- **Total amount of disk used:** 715.90 KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'Волки, волки!!',
'title': 'Минобороны обвинило «гражданского сотрудника» в публикации скриншота из игры вместо фото террористов. И показало новое «неоспоримое подтверждение»',
'stance': 3
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `title`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "support",
1: "deny",
2: "query",
3: "comment",
```
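As an illustration, the data can be loaded with the `datasets` library and the label indices mapped back to these names (a minimal sketch; it assumes the default configuration of this repository):
```python
from datasets import load_dataset

# Minimal sketch: print human-readable stance labels for a few rows.
stance_names = ["support", "deny", "query", "comment"]   # tagset indices listed above

ds = load_dataset("strombergnlp/rustance", split="train")
for row in ds.select(range(3)):
    print(stance_names[row["stance"]], "|", row["text"][:60])
```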
### Data Splits
| name |train|
|---------|----:|
|rustance|958 sentences|
## Dataset Creation
### Curation Rationale
Toy data for training and especially evaluating stance prediction in Russian
### Source Data
#### Initial Data Collection and Normalization
The data is comments scraped from a Russian news site not situated in Russia, [Meduza](https://meduza.io/), in 2018.
#### Who are the source language producers?
Russian speakers including from the Russian diaspora, especially Latvia
### Annotations
#### Annotation process
Annotators labelled comments for supporting, denying, querying or just commenting on a news article.
#### Who are the annotators?
Russian native speakers, IT education, male, 20s.
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of misinformative content being in this data. The data has NOT been vetted for any content.
### Discussion of Biases
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{lozhnikov2018stance,
title={Stance prediction for russian: data and analysis},
author={Lozhnikov, Nikita and Derczynski, Leon and Mazzara, Manuel},
booktitle={International Conference in Software Engineering for Defence Applications},
pages={176--186},
year={2018},
organization={Springer}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| [
-0.20909973978996277,
-0.39465755224227905,
0.3358038067817688,
-0.046442706137895584,
-0.38803285360336304,
0.09831870347261429,
-0.16893813014030457,
-0.2721593976020813,
0.3602326214313507,
0.0424635224044323,
-0.5786551833152771,
-1.2013517618179321,
-0.7432693839073181,
-0.22213369607... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vadis/sv-ident | vadis | 2022-11-07T20:51:06Z | 23 | 1 | sv-ident | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"languag... | 2022-11-07T20:51:06Z | 2022-06-17T08:33:04.000Z | 2022-06-17T08:33:04 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- de
license:
- mit
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
- semantic-similarity-classification
pretty_name: SV-Ident
paperswithcode_id: sv-ident
---
# Dataset Card for SV-Ident
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://vadis-project.github.io/sv-ident-sdp2022/
- **Repository:** https://github.com/vadis-project/sv-ident
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** svident2022@googlegroups.com
### Dataset Summary
SV-Ident comprises 4,248 sentences from social science publications in English and German. The data is the official data for the Shared Task: “Survey Variable Identification in Social Science Publications” (SV-Ident) 2022. Visit the homepage to find out more details about the shared task.
### Supported Tasks and Leaderboards
The dataset supports:
- **Variable Detection**: identifying whether a sentence contains a variable mention or not.
- **Variable Disambiguation**: identifying which variable from a given vocabulary is mentioned in a sentence. **NOTE**: for this task, you will need to also download the variable metadata from [here](https://bit.ly/3Nuvqdu).
### Languages
The text in the dataset is in English and German, as written by researchers. The domain of the texts is scientific publications in the social sciences.
## Dataset Structure
### Data Instances
```
{
"sentence": "Our point, however, is that so long as downward (favorable comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.",
"is_variable": 1,
"variable": ["exploredata-ZA5400_VarV66", "exploredata-ZA5400_VarV53"],
"research_data": ["ZA5400"],
"doc_id": "73106",
"uuid": "b9fbb80f-3492-4b42-b9d5-0254cc33ac10",
"lang": "en",
}
```
### Data Fields
The following data fields are provided for documents:
```
`sentence`: Textual instance, which may contain a variable mention.<br />
`is_variable`: Label, whether the textual instance contains a variable mention (1) or not (0). This column can be used for Task 1 (Variable Detection).<br />
`variable`: Variables (separated by a comma ";") that are mentioned in the textual instance. This column can be used for Task 2 (Variable Disambiguation). Variables with the "unk" tag could not be mapped to a unique variable.<br />
`research_data`: Research data IDs (separated by a ";") that are relevant for each instance (and in general for each "doc_id").<br />
`doc_id`: ID of the source document. Each document is written in one language (either English or German).<br />
`uuid`: Unique ID of the instance in uuid4 format.<br />
`lang`: Language of the sentence.
```
The language for each document can be found in the document-language mapping file [here](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_languages.json), which maps `doc_id` to a language code (`en`, `de`). The variables metadata (i.e., the vocabulary) can be downloaded from this [link](https://bit.ly/3Nuvqdu). Note, that each `research_data` contains hundreds of variables (these can be understood as the corpus of documents to choose the most relevant from). If the variable has an "unk" tag, it means that the sentence contains a variable that has not been disambiguated. Such sentences could be used for Task 1 and filtered out for Task 2. The metadata file has the following format:
```
{
"research_data_id_1": {
"variable_id_1": VARIABLE_METADATA,
...
"variable_id_n": VARIABLE_METADATA,
},
...
"research_data_id_n": {...},
}
```
Each variable may contain all (or some) of the following values:
```
study_title: The title of the research data study.
variable_label: The label of the variable.
variable_name: The name of the variable.
question_text: The question of the variable in the original language.
question_text_en: The question of the variable in English.
sub_question: The sub-question of the variable.
item_categories: The item categories of the variable.
answer_categories: The answers of the variable.
topic: The topics of the variable in the original language.
topic_en: The topics of the variable in English.
```
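A minimal sketch for pairing a sentence with its candidate variables for Task 2, assuming the vocabulary above has been saved locally as `variables_metadata.json` (a placeholder file name) and that the dataset loads with the `datasets` library:
```python
import json
from datasets import load_dataset

# Minimal sketch: collect the candidate variables for one sentence.
ds = load_dataset("vadis/sv-ident", split="train")
with open("variables_metadata.json", encoding="utf-8") as f:
    vocab = json.load(f)   # {research_data_id: {variable_id: metadata, ...}, ...}

example = ds[0]
research_data = example["research_data"]
if isinstance(research_data, str):           # the raw field is ";"-separated
    research_data = research_data.split(";")

candidates = {}
for rd_id in research_data:
    candidates.update(vocab.get(rd_id, {}))

print(example["sentence"])
print(len(candidates), "candidate variables to disambiguate against")
```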
### Data Splits
| Split | Number of sentences |
| ------------------- | ------------------------------------ |
| Train | 3,823 |
| Validation | 425 |
## Dataset Creation
### Curation Rationale
The dataset was curated by the VADIS project (https://vadis-project.github.io/).
The documents were annotated by two expert annotators.
### Source Data
#### Initial Data Collection and Normalization
The original data are available at GESIS (https://www.gesis.org/home) in an unprocessed format.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
The documents were annotated by two expert annotators.
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
VADIS project (https://vadis-project.github.io/)
### Licensing Information
All documents originate from the Social Science Open Access Repository (SSOAR) and are licensed accordingly. The original document URLs are provided in [document_urls.json](https://github.com/vadis-project/sv-ident/blob/main/data/train/document_urlsjson). For more information on licensing, please refer to the terms and conditions on the [SSOAR Grant of Licences page](https://www.gesis.org/en/ssoar/home/information/grant-of-licences).
### Citation Information
```
@inproceedings{tsereteli-etal-2022-overview,
title = "Overview of the {SV}-Ident 2022 Shared Task on Survey Variable Identification in Social Science Publications",
author = "Tsereteli, Tornike and
Kartal, Yavuz Selim and
Ponzetto, Simone Paolo and
Zielinski, Andrea and
Eckert, Kai and
Mayr, Philipp",
booktitle = "Proceedings of the Third Workshop on Scholarly Document Processing",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.sdp-1.29",
pages = "229--246",
abstract = "In this paper, we provide an overview of the SV-Ident shared task as part of the 3rd Workshop on Scholarly Document Processing (SDP) at COLING 2022. In the shared task, participants were provided with a sentence and a vocabulary of variables, and asked to identify which variables, if any, are mentioned in individual sentences from scholarly documents in full text. Two teams made a total of 9 submissions to the shared task leaderboard. While none of the teams improve on the baseline systems, we still draw insights from their submissions. Furthermore, we provide a detailed evaluation. Data and baselines for our shared task are freely available at \url{https://github.com/vadis-project/sv-ident}.",
}
```
### Contributions
[Needs More Information] | [
-0.2813505232334137,
-0.4550255835056305,
0.35599222779273987,
0.17191150784492493,
-0.4231654405593872,
0.11663515865802765,
-0.32022595405578613,
-0.2417224943637848,
0.4211484491825104,
0.4838084876537323,
-0.6828925013542175,
-0.9745655655860901,
-0.5465320944786072,
0.1964860856533050... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ScandEval/scala-fo | ScandEval | 2023-07-05T09:48:02Z | 23 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fo",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-07-05T09:48:02Z | 2022-06-27T15:15:35.000Z | 2022-06-27T15:15:35 | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- fo
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MicPie/unpredictable_5k | MicPie | 2022-08-04T19:36:03Z | 23 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | 2022-08-04T19:36:03Z | 2022-07-06T18:51:40.000Z | 2022-07-06T18:51:40 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-5k
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
'task': task identifier
'input': column elements of a specific row in the table.
'options': for multiple choice classification, it provides the options to choose from.
'output': target column element of the same row as input.
'pageTitle': the title of the page containing the table.
'outputColName': output column name
'url': url to the website containing the table
'wdcFile': WDC Web Table Corpus file
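To illustrate how these fields compose into a few-shot task, a minimal sketch that gathers a handful of examples sharing one `task` id and concatenates them into a prompt (the `input:`/`output:` template is an arbitrary illustration, not the formatting used in the paper):
```python
from datasets import load_dataset

# Minimal sketch: build a few-shot prompt from examples of a single task.
ds = load_dataset("MicPie/unpredictable_5k", split="train")

task_name = ds[0]["task"]
examples = [ex for ex in ds.select(range(1000)) if ex["task"] == task_name][:4]

prompt = ""
for ex in examples[:-1]:
    prompt += f"input: {ex['input']}\noutput: {ex['output']}\n\n"
prompt += f"input: {examples[-1]['input']}\noutput:"
print(prompt)
```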
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| [
-0.5704367756843567,
-0.5427935719490051,
0.44631144404411316,
0.32006391882896423,
0.08171776682138443,
0.13594937324523926,
-0.11372107267379761,
-0.6336281299591064,
0.5053854584693909,
0.3079053461551666,
-1.0241135358810425,
-0.6626850962638855,
-0.6523168087005615,
0.2303746789693832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Khedesh/DeepSentiPers | Khedesh | 2022-07-12T11:20:46Z | 23 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2022-07-12T11:20:46Z | 2022-07-12T10:33:55.000Z | 2022-07-12T10:33:55 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Bingsu/KcBERT_Pre-Training_Corpus | Bingsu | 2022-07-13T07:26:02Z | 23 | 0 | null | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"langu... | 2022-07-13T07:26:02Z | 2022-07-13T06:18:42.000Z | 2022-07-13T06:18:42 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KcBERT Pre-Training Corpus (Korean News Comments)
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# KcBERT Pre-Training Corpus (Korean News Comments)
## Dataset Description
- **Homepage:** [KcBERT Pre-Training Corpus](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments)
- **Repository:** [Beomi/KcBERT](https://github.com/Beomi/KcBERT)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
## KcBERT
[beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base)
Github KcBERT Repo: [https://github.com/Beomi/KcBERT](https://github.com/Beomi/KcBERT)
KcBERT is a Korean Comments BERT model pretrained on this corpus.
(You can use it via Huggingface's Transformers library!)
This Kaggle dataset contains the **CLEANED** data, preprocessed with the code below.
```python
import re
import emoji
from soynlp.normalizer import repeat_normalize

# Note: `emoji.UNICODE_EMOJI` is a flat dict in older releases of the emoji package;
# newer releases changed this API, so the snippet assumes an older version.
emojis = ''.join(emoji.UNICODE_EMOJI.keys())

# Keep spaces, basic punctuation, ASCII, Hangul (ㄱ-힣) and emoji; everything else is dropped.
pattern = re.compile(f'[^ .,?!/@$%~%·∼()\x00-\x7Fㄱ-힣{emojis}]+')
url_pattern = re.compile(
    r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)')

def clean(x):
    x = pattern.sub(' ', x)                 # replace disallowed characters with a space
    x = url_pattern.sub('', x)              # drop URLs
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)  # cap repeated characters (e.g. ㅋㅋㅋㅋ -> ㅋㅋ)
    return x
```
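Applied to an illustrative raw comment (the input string here is made up):
```python
print(clean("이 영화 정말 재밌어요ㅋㅋㅋㅋㅋ 👍👍 https://example.com"))
# -> roughly "이 영화 정말 재밌어요ㅋㅋ 👍👍" (URL removed, repeated characters capped at 2)
```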
### License
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Dataset Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus")
>>> dataset
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 86246285
})
})
```
### Data Size
download: 7.90 GiB<br>
generated: 11.86 GiB<br>
total: 19.76 GiB
※ You can download this dataset from [kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments), and it's 5 GiB. (12.48 GiB when uncompressed)
### Data Fields
- text: `string`
### Data Splits
| | train |
| ---------- | -------- |
| # of texts | 86246285 |
| [
-0.35783642530441284,
-0.395912766456604,
0.23753201961517334,
0.49970704317092896,
-0.4797445833683014,
0.23470008373260498,
-0.5769453644752502,
-0.07748081535100937,
0.41392815113067627,
0.435139000415802,
-0.5890151858329773,
-0.8827386498451233,
-0.6834689378738403,
0.2573597133159637... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rajistics/indian_food_images | rajistics | 2022-08-04T17:58:49Z | 23 | 0 | null | [
"task_categories:image-classification",
"region:us"
] | 2022-08-04T17:58:49Z | 2022-07-15T14:40:09.000Z | 2022-07-15T14:40:09 | ---
task_categories:
- image-classification
---
Source of dataset: [Kaggle](https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification)
This dataset contains images of food across 20 different classes, several of which are Indian dishes. All the images were extracted from Google. There are only a few images per class, so data augmentation and transfer learning are well suited here.
Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa" | [
-0.26038461923599243,
-0.6742011308670044,
-0.09219653904438019,
-0.10252280533313751,
0.15104854106903076,
0.03591132536530495,
0.16031233966350555,
-0.47519809007644653,
0.021700650453567505,
0.42952242493629456,
-0.19779297709465027,
-0.5364633798599243,
-0.9655588269233704,
0.317524462... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
srivatsavaasista/textgenerator-ds-mini | srivatsavaasista | 2022-07-27T13:05:26Z | 23 | 0 | null | [
"region:us"
] | 2022-07-27T13:05:26Z | 2022-07-27T13:04:59.000Z | 2022-07-27T13:04:59 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai-forever/Peter | ai-forever | 2022-10-25T11:09:06Z | 23 | 3 | null | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:ru",
"license:mit",
"optical-character-recognition",
"text-detection",
"ocr",
"arxiv:2103.09354",
"region:us"
] | 2022-10-25T11:09:06Z | 2022-08-25T10:03:42.000Z | 2022-08-25T10:03:42 | ---
language:
- ru
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# Digital Peter
The Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
Paper is available at http://arxiv.org/abs/2103.09354
## Description
Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts.
A detailed description of the problem (with an immersion in the problem) can be found in [detailed_description_of_the_task_en.pdf](https://github.com/sberbank-ai/digital_peter_aij2020/blob/master/desc/detailed_description_of_the_task_en.pdf)
The dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images, each dictionary must contain fields:
- `file_name` - name of the image file.
- `id` for image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description of one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find the text of the line.
- `segmentation` - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y.
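A minimal sketch for reading this markup and grouping, per page image, the line polygons with their texts (the annotation file name and field access follow the description above):
```python
import json
from collections import defaultdict

# Minimal sketch: group polygons and their text by image file name.
with open("annotation.json", encoding="utf-8") as f:
    ann = json.load(f)

images = {img["id"]: img["file_name"] for img in ann["images"]}

lines_per_image = defaultdict(list)
for a in ann["annotations"]:
    lines_per_image[images[a["image_id"]]].append({
        "polygon": a["segmentation"],                # x/y coordinate pairs
        "text": a["attributes"].get("translation"),  # line text, per the card
    })

some_image = next(iter(lines_per_image))
print(some_image, "-", len(lines_per_image[some_image]), "annotated lines")
```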
## Competition
We held a competition based on the Digital Peter dataset.
Here is the GitHub [link](https://github.com/sberbank-ai/digital_peter_aij2020), and here is the competition [page](https://ods.ai/tracks/aij2020) (registration required). | [
-0.6244250535964966,
-0.602358341217041,
0.49290475249290466,
0.10859700292348862,
-0.42313167452812195,
-0.168451726436615,
-0.18796996772289276,
-0.6300035715103149,
0.265749454498291,
0.6675684452056885,
-0.37305596470832825,
-0.5580674409866333,
-0.8259137868881226,
0.46176469326019287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrm8488/sst2-es-mt | mrm8488 | 2022-09-03T16:41:42Z | 23 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:sst2",
"language:es",
"license:unknown",
"region:us"
] | 2022-09-03T16:41:42Z | 2022-09-02T20:28:50.000Z | 2022-09-02T20:28:50 | ---
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- sst2
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Stanford Sentiment Treebank v2
---
# SST-2 Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2)
#### For more information check the official [Model Card](https://huggingface.co/datasets/sst2) | [
0.0644933208823204,
-0.7034056782722473,
0.22700470685958862,
0.6469166278839111,
-0.8353930115699768,
0.07266296446323395,
0.21700885891914368,
-0.4393565356731415,
0.6773802042007446,
0.507185697555542,
-0.9025235176086426,
-0.5300082564353943,
-0.6759766936302185,
-0.12283550202846527,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
truongpdd/Covid19-NER-Vietnamese-word | truongpdd | 2022-09-09T05:57:51Z | 23 | 0 | null | [
"region:us"
] | 2022-09-09T05:57:51Z | 2022-09-09T05:57:43.000Z | 2022-09-09T05:57:43 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/xor-tydi | crystina-z | 2022-09-29T02:54:47Z | 23 | 0 | null | [
"region:us"
] | 2022-09-29T02:54:47Z | 2022-09-19T16:11:37.000Z | 2022-09-19T16:11:37 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
j0hngou/ccmatrix_en-it | j0hngou | 2022-09-26T16:34:54Z | 23 | 0 | null | [
"language:en",
"language:it",
"region:us"
] | 2022-09-26T16:34:54Z | 2022-09-19T16:33:17.000Z | 2022-09-19T16:33:17 | ---
language:
- en
- it
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
artemsnegirev/dialogs_from_jokes | artemsnegirev | 2022-09-27T11:43:32Z | 23 | 1 | null | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ru",
"license:cc0-1.0",
"region:us"
] | 2022-09-27T11:43:32Z | 2022-09-27T11:32:40.000Z | 2022-09-27T11:32:40 | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: Dialogs from Jokes
size_categories:
- 100K<n<1M
task_categories:
- conversational
task_ids:
- dialogue-generation
license: cc0-1.0
---
A JSON-converted version of the dialogue dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz). | [
-0.1737520396709442,
-0.6477071642875671,
0.17237861454486847,
0.12184643000364304,
-0.18637700378894806,
0.32684195041656494,
-0.4701114594936371,
-0.17604145407676697,
0.5553216934204102,
1.249867558479309,
-0.8116728067398071,
-0.6125637888908386,
-0.3519001007080078,
0.3326500356197357... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
barkermrl/imagenet-a | barkermrl | 2022-10-05T17:23:33Z | 23 | 1 | null | [
"license:mit",
"region:us"
] | 2022-10-05T17:23:33Z | 2022-10-05T09:56:31.000Z | 2022-10-05T09:56:31 | ---
license: mit
---
The ImageNet-A dataset contains 7,500 natural adversarial examples.
Source: https://github.com/hendrycks/natural-adv-examples.
Also see the ImageNet-C and ImageNet-P datasets at https://github.com/hendrycks/robustness
```
@article{hendrycks2019nae,
  title={Natural Adversarial Examples},
  author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song},
  journal={arXiv preprint arXiv:1907.07174},
  year={2019}
}
```
There are 200 classes we consider. The WordNet ID and a description of each class is as follows.
n01498041 stingray
n01531178 goldfinch
n01534433 junco
n01558993 American robin
n01580077 jay
n01614925 bald eagle
n01616318 vulture
n01631663 newt
n01641577 American bullfrog
n01669191 box turtle
n01677366 green iguana
n01687978 agama
n01694178 chameleon
n01698640 American alligator
n01735189 garter snake
n01770081 harvestman
n01770393 scorpion
n01774750 tarantula
n01784675 centipede
n01819313 sulphur-crested cockatoo
n01820546 lorikeet
n01833805 hummingbird
n01843383 toucan
n01847000 duck
n01855672 goose
n01882714 koala
n01910747 jellyfish
n01914609 sea anemone
n01924916 flatworm
n01944390 snail
n01985128 crayfish
n01986214 hermit crab
n02007558 flamingo
n02009912 great egret
n02037110 oystercatcher
n02051845 pelican
n02077923 sea lion
n02085620 Chihuahua
n02099601 Golden Retriever
n02106550 Rottweiler
n02106662 German Shepherd Dog
n02110958 pug
n02119022 red fox
n02123394 Persian cat
n02127052 lynx
n02129165 lion
n02133161 American black bear
n02137549 mongoose
n02165456 ladybug
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant
n02226429 grasshopper
n02231487 stick insect
n02233338 cockroach
n02236044 mantis
n02259212 leafhopper
n02268443 dragonfly
n02279972 monarch butterfly
n02280649 small white
n02281787 gossamer-winged butterfly
n02317335 starfish
n02325366 cottontail rabbit
n02346627 porcupine
n02356798 fox squirrel
n02361337 marmot
n02410509 bison
n02445715 skunk
n02454379 armadillo
n02486410 baboon
n02492035 white-headed capuchin
n02504458 African bush elephant
n02655020 pufferfish
n02669723 academic gown
n02672831 accordion
n02676566 acoustic guitar
n02690373 airliner
n02701002 ambulance
n02730930 apron
n02777292 balance beam
n02782093 balloon
n02787622 banjo
n02793495 barn
n02797295 wheelbarrow
n02802426 basketball
n02814860 lighthouse
n02815834 beaker
n02837789 bikini
n02879718 bow
n02883205 bow tie
n02895154 breastplate
n02906734 broom
n02948072 candle
n02951358 canoe
n02980441 castle
n02992211 cello
n02999410 chain
n03014705 chest
n03026506 Christmas stocking
n03124043 cowboy boot
n03125729 cradle
n03187595 rotary dial telephone
n03196217 digital clock
n03223299 doormat
n03250847 drumstick
n03255030 dumbbell
n03291819 envelope
n03325584 feather boa
n03355925 flagpole
n03384352 forklift
n03388043 fountain
n03417042 garbage truck
n03443371 goblet
n03444034 go-kart
n03445924 golf cart
n03452741 grand piano
n03483316 hair dryer
n03584829 clothes iron
n03590841 jack-o'-lantern
n03594945 jeep
n03617480 kimono
n03666591 lighter
n03670208 limousine
n03717622 manhole cover
n03720891 maraca
n03721384 marimba
n03724870 mask
n03775071 mitten
n03788195 mosque
n03804744 nail
n03837869 obelisk
n03840681 ocarina
n03854065 organ
n03888257 parachute
n03891332 parking meter
n03935335 piggy bank
n03982430 billiard table
n04019541 hockey puck
n04033901 quill
n04039381 racket
n04067472 reel
n04086273 revolver
n04099969 rocking chair
n04118538 rugby ball
n04131690 salt shaker
n04133789 sandal
n04141076 saxophone
n04146614 school bus
n04147183 schooner
n04179913 sewing machine
n04208210 shovel
n04235860 sleeping bag
n04252077 snowmobile
n04252225 snowplow
n04254120 soap dispenser
n04270147 spatula
n04275548 spider web
n04310018 steam locomotive
n04317175 stethoscope
n04344873 couch
n04347754 submarine
n04355338 sundial
n04366367 suspension bridge
n04376876 syringe
n04389033 tank
n04399382 teddy bear
n04442312 toaster
n04456115 torch
n04482393 tricycle
n04507155 umbrella
n04509417 unicycle
n04532670 viaduct
n04540053 volleyball
n04554684 washing machine
n04562935 water tower
n04591713 wine bottle
n04606251 shipwreck
n07583066 guacamole
n07695742 pretzel
n07697313 cheeseburger
n07697537 hot dog
n07714990 broccoli
n07718472 cucumber
n07720875 bell pepper
n07734744 mushroom
n07749582 lemon
n07753592 banana
n07760859 custard apple
n07768694 pomegranate
n07831146 carbonara
n09229709 bubble
n09246464 cliff
n09472597 volcano
n09835506 baseball player
n11879895 rapeseed
n12057211 yellow lady's slipper
n12144580 corn
n12267677 acorn | [
-0.8839462995529175,
-0.5997164845466614,
0.03611546754837036,
0.2817186117172241,
0.038405947387218475,
0.1859251707792282,
0.38643592596054077,
-0.45247960090637207,
0.6107352375984192,
0.32879894971847534,
-0.5350159406661987,
-0.20257139205932617,
-0.8889873623847961,
-0.05720039829611... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juliensimon/autotrain-data-chest-xray-demo | juliensimon | 2022-10-06T09:15:55Z | 23 | 0 | null | [
"task_categories:image-classification",
"region:us"
] | 2022-10-06T09:15:55Z | 2022-10-06T08:25:44.000Z | 2022-10-06T08:25:44 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: chest-xray-demo
## Dataset Description
This dataset has been automatically processed by AutoTrain for project chest-xray-demo.
The original dataset is located at https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia
## Dataset Structure
```
├── train
│ ├── NORMAL
│ └── PNEUMONIA
└── valid
├── NORMAL
└── PNEUMONIA
```
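Given this layout, the original folders can be loaded directly with the `imagefolder` builder of the `datasets` library (a minimal sketch; `chest_xray/` is a placeholder for wherever the tree above was extracted, and split names are inferred from the `train`/`valid` folder names):
```python
from datasets import load_dataset

# Minimal sketch: load the NORMAL/PNEUMONIA folders as a labeled image dataset.
ds = load_dataset("imagefolder", data_dir="chest_xray")
print(ds)                                    # splits inferred from folder names
print(ds["train"].features["label"].names)   # ['NORMAL', 'PNEUMONIA']
```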
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<2090x1858 L PIL image>",
"target": 0
},
{
"image": "<1422x1152 L PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['NORMAL', 'PNEUMONIA'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5216 |
| valid | 624 |
| [
-0.30870580673217773,
0.19015097618103027,
0.24855010211467743,
0.013688415288925171,
-0.43667730689048767,
-0.046645794063806534,
0.10439823567867279,
-0.013687333092093468,
0.20303788781166077,
0.49282002449035645,
-0.6694222092628479,
-0.7174903750419617,
-0.8606187701225281,
0.07166408... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/multixscience_dense_oracle | allenai | 2022-11-18T19:57:37Z | 23 | 1 | multi-xscience | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-18T19:57:37Z | 2022-10-12T13:30:45.000Z | 2022-10-12T13:30:45 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
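The scoring idea behind such a dense retriever can be sketched directly with the Contriever checkpoint (a simplified illustration of mean-pooled embeddings and dot-product scoring, not the PyTerrier pipeline that actually produced this dataset):
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Simplified sketch: mean-pooled Contriever embeddings scored with a dot product.
tok = AutoTokenizer.from_pretrained("facebook/contriever-msmarco")
model = AutoModel.from_pretrained("facebook/contriever-msmarco")

def embed(texts):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state             # (batch, seq, dim)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)             # mean pooling over tokens

query = embed(["<related_work section used as the query>"])
docs = embed(["candidate abstract 1", "candidate abstract 2"])
print(query @ docs.T)                                     # higher score = more relevant
```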
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.2081 | 0.2081 | | [
-0.3151857852935791,
-0.10956830531358719,
0.3051331639289856,
0.11195564270019531,
-0.17404253780841827,
0.032800428569316864,
-0.0875968486070633,
0.04566272348165512,
0.7233825325965881,
0.570819616317749,
-0.6631649732589722,
-0.5072017908096313,
-0.5969899892807007,
0.0580794773995876... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alfredodeza/wine-ratings | alfredodeza | 2022-10-15T13:09:06Z | 23 | 2 | null | [
"region:us"
] | 2022-10-15T13:09:06Z | 2022-10-14T12:28:47.000Z | 2022-10-14T12:28:47 | ---
dataset_info:
features:
- name: name
dtype: string
- name: region
dtype: string
- name: variety
dtype: string
- name: rating
dtype: float32
- name: notes
dtype: string
splits:
- name: test
num_bytes: 82422
num_examples: 200
- name: train
num_bytes: 13538613
num_examples: 32780
- name: validation
num_bytes: 83047
num_examples: 200
download_size: 0
dataset_size: 13704082
---
# wine-ratings
Processing, EDA, and ML on wine ratings | [
-0.4598166048526764,
-0.34689608216285706,
0.8852745890617371,
0.9841010570526123,
-0.6193612813949585,
-0.2350807785987854,
0.04746202751994133,
-0.5234929323196411,
0.6662524938583374,
0.6697006225585938,
-0.6503274440765381,
-0.5514200925827026,
-0.6300196051597595,
-0.02880464866757393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harsit/xnli2.0_train_bulgarian | Harsit | 2022-10-15T09:15:06Z | 23 | 1 | null | [
"region:us"
] | 2022-10-15T09:15:06Z | 2022-10-15T09:14:29.000Z | 2022-10-15T09:14:29 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TheoTsio/Health_Misinfo | TheoTsio | 2023-08-28T21:51:26Z | 23 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"health_misinformation, credibility",
"region:us"
] | 2023-08-28T21:51:26Z | 2022-10-19T12:45:11.000Z | 2022-10-19T12:45:11 | ---
task_categories:
- text-classification
language:
- en
tags:
- health_misinformation, credibility
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The health misinfo dataset is an English document dataset containing just over 6k unique web articles related to health issues. It was created in an effort to detect misinformation in health documents, and was derived from the relevance judgments of the TREC Health Misinformation track.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.15185153484344482,
-0.340846449136734,
-0.0035337149165570736,
0.18967179954051971,
-0.22490520775318146,
-0.09802597761154175,
0.07063911855220795,
-0.5155460238456726,
0.4940892159938812,
0.5941541790962219,
-0.7667379975318909,
-1.0784040689468384,
-0.7503523230552673,
0.284054428339... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
projecte-aina/GuiaCat | projecte-aina | 2023-11-25T06:27:37Z | 23 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-11-25T06:27:37Z | 2022-10-24T11:11:31.000Z | 2022-10-24T11:11:31 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: GuiaCat
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
---
# Dataset Card for GuiaCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [blanca.calvo@bsc.es](blanca.calvo@bsc.es)
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, with 5 associated scores and a label of sentiment. The data was provided by [GuiaCat](https://guiacat.cat) and curated by the BSC.
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.
### Data Instances
```
7,7,7,7,7.0,"Aquest restaurant té una llarga història. Ara han tornat a canviar d'amos i aquest canvi s'ha vist molt repercutit en la carta, preus, servei, etc. Hi ha molta varietat de menjar, i tot boníssim, amb especialitats molt ben trobades. El servei molt càlid i agradable, dóna gust que et serveixin així. I la decoració molt agradable també, bastant curiosa. En fi, pel meu gust, un bon restaurant i bé de preu.",bo
8,9,8,7,8.0,"Molt recomanable en tots els sentits. El servei és molt atent, pulcre i gens agobiant; alhora els plats també presenten un aspecte acurat, cosa que fa, juntament amb l'ambient, que t'oblidis de que, malauradament, està situat pròxim a l'autopista.Com deia, l'ambient és molt acollidor, té un menjador principal molt elegant, perfecte per quedar bé amb tothom!Tot i això, destacar la bona calitat / preu, ja que aquest restaurant té una carta molt extensa en totes les branques i completa, tant de menjar com de vins. Pel qui entengui de vins, podriem dir que tot i tenir una carta molt rica, es recolza una mica en els clàssics.",molt bo
```
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: average of all the scores
- text: the review
- label: it can be "molt bo", "bo", "regular", "dolent", "molt dolent"
### Data Splits
* dev.csv: 500 examples
* test.csv: 500 examples
* train.csv: 4,750 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by [GuiaCat](https://guiacat.cat).
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users from GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is as follows (a small code sketch of this mapping appears after the list):
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
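A minimal sketch of that mapping as code; the top boundary is treated as inclusive (the 8.0 sample above is labelled "molt bo"), while the behaviour at the lower boundaries is an assumption, since the card leaves it unspecified:

```python
def label_from_avg(avg: float) -> str:
    # Top boundary inclusive, matching the 8.0 -> "molt bo" sample above;
    # the other boundaries are assumptions (the card does not specify them).
    if avg >= 8:
        return "molt bo"
    if avg >= 6:
        return "bo"
    if avg >= 4:
        return "regular"
    if avg >= 2:
        return "dolent"
    return "molt dolent"

print(label_from_avg(8.0), label_from_avg(7.0))  # molt bo, bo -- matches the two samples above
```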
#### Who are the annotators?
Users
### Personal and Sensitive Information
No personal information included, although it could contain hate or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
```
### Contributions
We want to thank GuiaCat for providing this data.
| [
-0.36326974630355835,
-0.6492356657981873,
0.17949040234088898,
0.31322479248046875,
-0.11690796911716461,
0.060351088643074036,
-0.06758199632167816,
-0.30164095759391785,
0.6576605439186096,
0.7848957180976868,
-0.30084356665611267,
-1.0221933126449585,
-0.4664938449859619,
0.23154656589... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aunsiels/Quasimodo | Aunsiels | 2022-10-24T12:30:23Z | 23 | 1 | null | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"knowledge base",
"commo... | 2022-10-24T12:30:23Z | 2022-10-24T12:01:21.000Z | 2022-10-24T12:01:21 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-2.0
multilinguality:
- monolingual
pretty_name: Quasimodo
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- knowledge base
- commonsense
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# Dataset Card for Quasimodo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo
- **Repository:** https://github.com/Aunsiels/CSK
- **Paper:** Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
### Dataset Summary
A commonsense knowledge base constructed automatically from question-answering forums and query logs.
### Supported Tasks and Leaderboards
Can be useful for tasks requiring external knowledge such as question answering.
### Languages
English
## Dataset Structure
### Data Instances
```python
{
"subject": "elephant",
"predicate": "has_body_part"
"object": "trunk",
"modality": "TBC[so long trunks] x#x2 // TBC[long trunks] x#x9 // TBC[big trunks] x#x6 // TBC[long trunk] x#x1 // TBC[such big trunks] x#x1 0 0.9999667967035647 elephants have trunks x#x34 x#xGoogle Autocomplete, Bing Autocomplete, Yahoo Questions, Answers.com Questions, Reddit Questions // a elephants have trunks x#x2 x#xGoogle Autocomplete // a elephant have a trunk x#x2 x#xGoogle Autocomplete // elephants have so long trunks x#x2 x#xGoogle Autocomplete // elephants have long trunks x#x8 x#xGoogle Autocomplete, Yahoo Questions, Answers.com Questions // elephants have big trunks x#x6 x#xGoogle Autocomplete, Answers.com Questions, Reddit Questions // elephants have trunk x#x3 x#xGoogle Autocomplete, Yahoo Questions // elephant have long trunks x#x1 x#xGoogle Autocomplete // elephant has a trunk x#x1 x#xGoogle Autocomplete // elephants have a trunk x#x2 x#xAnswers.com Questions // an elephant has a long trunk x#x1 x#xAnswers.com Questions // elephant have trunks x#x1 x#xAnswers.com Questions // elephants have such big trunks x#x1 x#xReddit Questions",
"score": 0.9999667967668732,
"local_sigma": 1.0
}
```
### Data Fields
- subject: The subject of the triple
- predicate: The predicate of the triple
- object: The object of the triple
- modality: Modalities associated with the triples with their counts. TBC means the object can be further refined to the listed objects
- is_negative: 1 if the statement was negated
- score: salience score of the supervised scoring model
- local sigma: strict conditional probability of observing a (predicate, object) with a specific subject. I.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.
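As an illustration only (not the paper's exact estimator), one way to read that definition is: among all triples sharing a given (predicate, object), the fraction whose subject is the one in question — a hypothetical sketch:

```python
from collections import Counter

def local_sigma(triples, subj, pred, obj):
    # Among all triples sharing this (predicate, object), the fraction whose
    # subject is `subj` -- one possible reading, for illustration only.
    po_counts = Counter((p, o) for _, p, o in triples)
    spo_counts = Counter(triples)
    return spo_counts[(subj, pred, obj)] / po_counts[(pred, obj)]

triples = [
    ("lawyer", "defend", "serial_killer"),
    ("lawyer", "make", "money"),
    ("banker", "make", "money"),
    ("artist", "make", "money"),
]
print(local_sigma(triples, "lawyer", "defend", "serial_killer"))  # 1.0
print(local_sigma(triples, "lawyer", "make", "money"))            # ~0.33 in this toy list
```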
## Dataset Creation
See original paper.
## Additional Information
### Licensing Information
CC-BY 2.0
### Citation Information
Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
| [
-0.752993106842041,
-0.7089319229125977,
0.44758105278015137,
0.13452576100826263,
-0.4592857360839844,
-0.16356079280376434,
-0.15125605463981628,
-0.5279937982559204,
0.4459998905658722,
0.24002669751644135,
-0.710999608039856,
-0.6170535683631897,
-0.379768967628479,
0.19970349967479706... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
julianmoraes/nouns-traits-captions | julianmoraes | 2022-10-25T02:31:12Z | 23 | 0 | null | [
"region:us"
] | 2022-10-25T02:31:12Z | 2022-10-25T02:31:10.000Z | 2022-10-25T02:31:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Muennighoff/P3 | Muennighoff | 2022-11-03T15:15:39Z | 23 | 11 | null | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-11-03T15:15:39Z | 2022-10-25T20:29:10.000Z | 2022-10-25T20:29:10 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: P3
size_categories:
- 100M<n<1B
task_categories:
- other
---
This is a reprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
| [
-0.31165727972984314,
-0.3184846341609955,
0.6062166690826416,
0.5304094552993774,
0.03879811242222786,
-0.3133208155632019,
-0.2400447577238083,
-0.09115079790353775,
0.3608440160751343,
0.7653661966323853,
-1.0820703506469727,
-0.5794418454170227,
-0.06920779496431351,
0.1942633539438247... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VietAI/vi_pubmed | VietAI | 2022-11-07T01:12:52Z | 23 | 6 | pubmed | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language:vi",
"language:en",
"license:cc",
"arxiv:2210.05610",
"arxiv:2210.05598",
"region:us"
] | 2022-11-07T01:12:52Z | 2022-11-06T01:36:50.000Z | 2022-11-06T01:36:50 | ---
license: cc
language:
- vi
- en
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: pubmed
dataset_info:
features:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: pubmed22
num_bytes: 44360028980
num_examples: 20087006
download_size: 23041004247
dataset_size: 44360028980
---
# Dataset Summary
20M Vietnamese PubMed biomedical abstracts translated by the [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610). The data has been used as an unlabeled dataset for [pretraining a Vietnamese Biomedical-domain Transformer model](https://arxiv.org/abs/2210.05598).

image source: [Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation](https://arxiv.org/abs/2210.05598)
# Language
- English: Original biomedical abstracts from [Pubmed](https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html)
- Vietnamese: Synthetic abstract translated by a [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610)
# Dataset Structure
- The English sequences are the original PubMed abstracts.
- The Vietnamese sequences are the corresponding synthetic translations produced by the English-Vietnamese translation model.
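A minimal loading sketch (the `pubmed22` split name and the `en`/`vi` fields come from the dataset metadata above; streaming avoids downloading the ~23 GB dump up front):

```python
from itertools import islice
from datasets import load_dataset

# Stream the parallel abstracts instead of downloading the full ~23 GB dump
ds = load_dataset("VietAI/vi_pubmed", split="pubmed22", streaming=True)

for example in islice(ds, 3):
    print(example["en"][:100])  # original English PubMed abstract
    print(example["vi"][:100])  # synthetic Vietnamese translation
```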
# Source Data - Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
# Licensing Information
[Courtesy of the U.S. National Library of Medicine.](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)
# Citation
```
@misc{mtet,
doi = {10.48550/ARXIV.2210.05610},
url = {https://arxiv.org/abs/2210.05610},
author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {MTet: Multi-domain Translation for English and Vietnamese},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
```
@misc{vipubmed,
doi = {10.48550/ARXIV.2210.05598},
url = {https://arxiv.org/abs/2210.05598},
author = {Phan, Long and Dang, Tai and Tran, Hieu and Phan, Vy and Chau, Lam D. and Trinh, Trieu H.},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | [
-0.14885587990283966,
-0.6968080401420593,
0.4768933653831482,
0.20982655882835388,
-0.4088820517063141,
-0.005761283915489912,
-0.3047400712966919,
-0.3530397117137909,
0.04616833105683327,
0.6550430059432983,
-0.2630012631416321,
-0.6299631595611572,
-0.8163744807243347,
0.52290558815002... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/lll | bigbio | 2022-12-22T15:44:52Z | 23 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:52Z | 2022-11-13T22:09:11.000Z | 2022-11-13T22:09:11 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: LLL05
homepage: http://genome.jouy.inra.fr/texte/LLLchallenge
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for LLL05
## Dataset Description
- **Homepage:** http://genome.jouy.inra.fr/texte/LLLchallenge
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The LLL05 challenge task is to learn rules to extract protein/gene interactions from biology abstracts from the Medline
bibliography database. The goal of the challenge is to test the ability of the participating IE systems to identify the
interactions and the gene/proteins that interact. The participants will test their IE patterns on a test set with the
aim of extracting the correct agent and target. The challenge focuses on information extraction of gene interactions in
Bacillus subtilis. Extracting gene interaction is the most popular event IE task in biology. Bacillus subtilis (Bs) is
a model bacterium and many papers have been published on direct gene interactions involved in sporulation. The gene
interactions are generally mentioned in the abstract and the full text of the paper is not needed. Extracting gene
interaction means, extracting the agent (proteins) and the target (genes) of all couples of genic interactions from
sentences.
## Citation Information
```
@article{article,
author = {Nédellec, C.},
year = {2005},
month = {01},
pages = {},
title = {Learning Language in Logic - Genic Interaction Extraction Challenge},
journal = {Proceedings of the Learning Language in Logic 2005 Workshop at the International Conference on Machine Learning}
}
```
| [
-0.35549452900886536,
-0.48361077904701233,
0.27527210116386414,
0.0215115025639534,
-0.2542705237865448,
-0.14685553312301636,
0.16378642618656158,
-0.5873423218727112,
0.30764538049697876,
0.07758873701095581,
-1.0058650970458984,
-0.475581556558609,
-0.6055830121040344,
0.63515782356262... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
WillHeld/wmt19-valid-only-zh_en | WillHeld | 2022-11-14T18:59:26Z | 23 | 0 | null | [
"region:us"
] | 2022-11-14T18:59:26Z | 2022-11-14T18:59:22.000Z | 2022-11-14T18:59:22 | ---
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: validation
num_bytes: 1107522
num_examples: 3981
download_size: 719471
dataset_size: 1107522
---
# Dataset Card for "wmt19-valid-only-zh_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5335052013397217,
-0.372542142868042,
0.4465775787830353,
0.27674436569213867,
-0.6460850834846497,
-0.14066405594348907,
-0.05555117130279541,
-0.25617605447769165,
0.8178665637969971,
0.5760321021080017,
-1.1647709608078003,
-0.8930492401123047,
-0.5684771537780762,
0.0401489846408367... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-sen_en-395337-2175269956 | autoevaluate | 2022-11-21T05:57:00Z | 23 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T05:57:00Z | 2022-11-21T05:02:34.000Z | 2022-11-21T05:02:34 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: futin/feed
dataset_config: sen_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/feed
* Config: sen_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.17978709936141968,
-0.35099831223487854,
0.4996345341205597,
0.10715118050575256,
0.12449964880943298,
-0.20992086827754974,
-0.021501194685697556,
-0.4804038107395172,
-0.0032300264574587345,
0.31474769115448,
-0.9182133078575134,
-0.2581559717655182,
-0.6515693664550781,
0.00562077574... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
declare-lab/HyperRED | declare-lab | 2022-11-23T10:55:14Z | 23 | 2 | null | [
"license:cc-by-sa-3.0",
"arxiv:2211.10018",
"region:us"
] | 2022-11-23T10:55:14Z | 2022-11-22T07:46:53.000Z | 2022-11-22T07:46:53 | ---
license: cc-by-sa-3.0
---
# Dataset Card for HyperRED
## Description
- **Repository:** https://github.com/declare-lab/HyperRED
- **Paper (EMNLP 2022):** https://arxiv.org/abs/2211.10018
### Summary
HyperRED is a dataset for the new task of hyper-relational extraction, which extracts relation triplets together with qualifier information such as time, quantity or location. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). HyperRED contains 44k sentences with 62 relation types and 44 qualifier types.
### Languages
English.
## Dataset Structure
### Data Fields
- **tokens:** Sentence text tokens.
- **entities:** List of each entity span. The span indices correspond to each token in the space-separated text (inclusive-start and exclusive-end index)
- **relations:** List of each relationship label between the head and tail entity spans. Each relation contains a list of qualifiers where each qualifier has the value entity span and qualifier label.
### Data Instances
An example instance of the dataset is shown below:
```
{
"tokens": ['Acadia', 'University', 'is', 'a', 'predominantly', 'undergraduate', 'university', 'located', 'in', 'Wolfville', ',', 'Nova', 'Scotia', ',', 'Canada', 'with', 'some', 'graduate', 'programs', 'at', 'the', 'master', "'", 's', 'level', 'and', 'one', 'at', 'the', 'doctoral', 'level', '.'],
"entities": [
{'span': (0, 2), 'label': 'Entity'},
{'span': (9, 13), 'label': 'Entity'},
{'span': (14, 15), 'label': 'Entity'},
],
"relations": [
{
"head": [0, 2],
"tail": [9, 13],
"label": "headquarters location",
"qualifiers": [
{"span": [14, 15], "label": "country"}
]
}
],
}
```
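Since spans are token offsets with an inclusive start and exclusive end (as described in the data fields above), the surface forms can be recovered with a simple slice — a minimal sketch over an abbreviated copy of the instance above:

```python
instance = {
    # abbreviated copy of the example instance above
    "tokens": ["Acadia", "University", "is", "a", "predominantly", "undergraduate",
               "university", "located", "in", "Wolfville", ",", "Nova", "Scotia",
               ",", "Canada"],
    "relations": [{"head": [0, 2], "tail": [9, 13], "label": "headquarters location",
                   "qualifiers": [{"span": [14, 15], "label": "country"}]}],
}

def span_text(tokens, span):
    # spans are token offsets with an inclusive start and exclusive end
    return " ".join(tokens[span[0]:span[1]])

tokens = instance["tokens"]
for rel in instance["relations"]:
    head = span_text(tokens, rel["head"])
    tail = span_text(tokens, rel["tail"])
    quals = [(q["label"], span_text(tokens, q["span"])) for q in rel["qualifiers"]]
    print(head, "--", rel["label"], "->", tail, quals)
    # Acadia University -- headquarters location -> Wolfville , Nova Scotia [('country', 'Canada')]
```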
### Data Splits
The dataset contains 39,840 instances for training, 1,000 instances for validation and 4,000 instances for testing.
### Dataset Creation
The dataset is constructed from distant supervision between Wikipedia and Wikidata, and the human annotation process is detailed in the paper.
## Citation Information
```
@inproceedings{chia2022hyperred,
title={A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach},
author={Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si and Soujanya Poria},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022}
}
``` | [
-0.6157450079917908,
-0.7165709733963013,
0.38618138432502747,
0.089649997651577,
-0.0403410866856575,
-0.2413577437400818,
-0.2756786644458771,
-0.47358381748199463,
0.2596447169780731,
0.36477190256118774,
-0.5986331701278687,
-0.9997451305389404,
-0.31457123160362244,
0.5491823554039001... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sachinkelenjaguri/webnlg_Table_to_Text | Sachinkelenjaguri | 2022-11-29T13:25:00Z | 23 | 0 | null | [
"region:us"
] | 2022-11-29T13:25:00Z | 2022-11-29T13:23:18.000Z | 2022-11-29T13:23:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ansa00/mnist_images | Ansa00 | 2022-12-08T11:08:36Z | 23 | 0 | null | [
"region:us"
] | 2022-12-08T11:08:36Z | 2022-12-08T11:07:40.000Z | 2022-12-08T11:07:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 594553015.3377689
num_examples: 51021
- name: test
num_bytes: 104426905.66123116
num_examples: 9004
download_size: 523900419
dataset_size: 698979920.9990001
---
# Dataset Card for "mnist_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6693387031555176,
-0.11253248900175095,
0.22372764348983765,
0.23836228251457214,
-0.48335641622543335,
-0.14986205101013184,
0.371076375246048,
-0.18857435882091522,
1.1587953567504883,
0.6524457335472107,
-0.8294707536697388,
-0.8799199461936951,
-0.744477391242981,
-0.142145469784736... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ZihaoLin/zhlds | ZihaoLin | 2022-12-16T20:26:09Z | 23 | 0 | null | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_ids:multi-class-image-classification",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-12-16T20:26:09Z | 2022-12-11T20:34:47.000Z | 2022-12-11T20:34:47 | ---
annotations_creators: []
language:
- en
language_creators: []
license:
- other
multilinguality: []
pretty_name: This is a test version for ELEVATER benchmark.
size_categories:
- 10M<n<100M
source_datasets:
- original
tags: []
task_categories:
- image-classification
- object-detection
task_ids:
- multi-class-image-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.47841677069664,
-0.5084842443466187,
0.14602938294410706,
0.278889000415802,
-0.21702472865581512,
0.24832050502300262,
-0.3366999328136444,
-0.3758932054042816,
0.6720380783081055,
0.6457639932632446,
-0.9167346358299255,
-1.2200127840042114,
-0.7551794052124023,
0.07273735105991364,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
proteinea/remote_homology | proteinea | 2022-12-12T16:20:18Z | 23 | 2 | null | [
"doi:10.57967/hf/1107",
"region:us"
] | 2022-12-12T16:20:18Z | 2022-12-12T15:55:43.000Z | 2022-12-12T15:55:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
irds/nyt | irds | 2023-01-05T03:47:43Z | 23 | 0 | null | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T03:47:43Z | 2023-01-05T03:47:37.000Z | 2023-01-05T03:47:37 | ---
pretty_name: '`nyt`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `nyt`
The `nyt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,864,661
This dataset is used by: [`nyt_trec-core-2017`](https://huggingface.co/datasets/irds/nyt_trec-core-2017), [`nyt_wksup`](https://huggingface.co/datasets/irds/nyt_wksup), [`nyt_wksup_train`](https://huggingface.co/datasets/irds/nyt_wksup_train), [`nyt_wksup_valid`](https://huggingface.co/datasets/irds/nyt_wksup_valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/nyt', 'docs')
for record in docs:
record # {'doc_id': ..., 'headline': ..., 'body': ..., 'source_xml': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Sandhaus2008Nyt,
title={The new york times annotated corpus},
author={Sandhaus, Evan},
journal={Linguistic Data Consortium, Philadelphia},
volume={6},
number={12},
pages={e26752},
year={2008}
}
```
| [
-0.24114729464054108,
-0.36527660489082336,
0.026274381205439568,
-0.04796985536813736,
-0.37400108575820923,
0.15197879076004028,
-0.2061195969581604,
-0.3474862277507782,
0.6253728270530701,
0.27413633465766907,
-0.27364063262939453,
-0.6888989210128784,
-0.5896034240722656,
0.3696103096... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/gids | DFKI-SLT | 2023-01-11T10:06:07Z | 23 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1804... | 2023-01-11T10:06:07Z | 2023-01-06T12:24:59.000Z | 2023-01-06T12:24:59 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Google-IISc Distant Supervision (GIDS) dataset for distantly-supervised
relation extraction
size_categories:
- 10K<n<100k
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: gids
features:
- name: sentence
dtype: string
- name: subj_id
dtype: string
- name: obj_id
dtype: string
- name: subj_text
dtype: string
- name: obj_text
dtype: string
- name: relation
dtype:
class_label:
names:
'0': NA
'1': /people/person/education./education/education/institution
'2': /people/person/education./education/education/degree
'3': /people/person/place_of_birth
'4': /people/deceased_person/place_of_death
splits:
- name: train
num_bytes: 5088421
num_examples: 11297
- name: validation
num_bytes: 844784
num_examples: 1864
- name: test
num_bytes: 2568673
num_examples: 5663
download_size: 8941490
dataset_size: 8501878
- config_name: gids_formatted
features:
- name: token
sequence: string
- name: subj_start
dtype: int32
- name: subj_end
dtype: int32
- name: obj_start
dtype: int32
- name: obj_end
dtype: int32
- name: relation
dtype:
class_label:
names:
'0': NA
'1': /people/person/education./education/education/institution
'2': /people/person/education./education/education/degree
'3': /people/person/place_of_birth
'4': /people/deceased_person/place_of_death
splits:
- name: train
num_bytes: 7075362
num_examples: 11297
- name: validation
num_bytes: 1173957
num_examples: 1864
- name: test
num_bytes: 3573706
num_examples: 5663
download_size: 8941490
dataset_size: 11823025
---
# Dataset Card for "gids"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS)
- **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
### Dataset Summary
The Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
Note:
- There is a formatted version that you can load with `datasets.load_dataset('gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### gids
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 8.5 MB
An example of 'train' looks as follows:
```json
{
"sentence": "War as appropriate. Private Alfred James_Smurthwaite Sample. 26614. 2nd Battalion Yorkshire Regiment. Son of Edward James Sample, of North_Ormesby , Yorks. Died 2 April 1917. Aged 29. Born Ormesby, Enlisted Middlesbrough. Buried BUCQUOY ROAD CEMETERY, FICHEUX. Not listed on the Middlesbrough War Memorial Private Frederick Scott. 46449. 4th Battalion Yorkshire Regiment. Son of William and Maria Scott, of 25, Aspinall St., Heywood, Lancs. Born at West Hartlepool. Died 27 May 1918. Aged 24.",
"subj_id": "/m/02qt0sv",
"obj_id": "/m/0fnhl9",
"subj_text": "James_Smurthwaite",
"obj_text": "North_Ormesby",
"relation": 4
}
```
#### gids_formatted
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
An example of 'train' looks as follows:
```json
{
"token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa", "County", "/", "Phoenix", "city", "resident", "and", "longtime", "customer", ",", "bought", "the", "business", "in", "2011", ",", "when", "then", "owners", "were", "facing", "closure", ".", "He", "renovated", "the", "diner", "is", "interior", ",", "increased", "training", "for", "staff", "and", "expanded", "the", "menu", "."],
"subj_start": 6,
"subj_end": 9,
"obj_start": 17,
"obj_end": 22,
"relation": 4
}
```
### Data Fields
The data fields are the same among all splits.
#### gids
- `sentence`: the sentence, a `string` feature.
- `subj_id`: the id of the relation subject mention, a `string` feature.
- `obj_id`: the id of the relation object mention, a `string` feature.
- `subj_text`: the text of the relation subject mention, a `string` feature.
- `obj_text`: the text of the relation object mention, a `string` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
#### gids_formatted
- `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
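Given the exclusive-end convention documented above, the mention strings and the relation name can be recovered with a couple of slices — a minimal sketch; the sample values below are illustrative, not taken from the dataset:

```python
LABELS = [
    "NA",
    "/people/person/education./education/education/institution",
    "/people/person/education./education/education/degree",
    "/people/person/place_of_birth",
    "/people/deceased_person/place_of_death",
]

def decode(example):
    # subj_end / obj_end are exclusive token indices, per the field description above
    subj = " ".join(example["token"][example["subj_start"]:example["subj_end"]])
    obj = " ".join(example["token"][example["obj_start"]:example["obj_end"]])
    return subj, LABELS[example["relation"]], obj

# illustrative values, not taken from the dataset
example = {"token": ["Alfred", "Smith", "was", "born", "in", "York", "."],
           "subj_start": 0, "subj_end": 2, "obj_start": 5, "obj_end": 6,
           "relation": 3}
print(decode(example))  # ('Alfred Smith', '/people/person/place_of_birth', 'York')
```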
### Data Splits
| | Train | Dev | Test |
|------|-------|------|------|
| GIDS | 11297 | 1864 | 5663 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1804-06987,
author = {Sharmistha Jat and
Siddhesh Khandelwal and
Partha P. Talukdar},
title = {Improving Distantly Supervised Relation Extraction using Word and
Entity Based Attention},
journal = {CoRR},
volume = {abs/1804.06987},
year = {2018},
url = {http://arxiv.org/abs/1804.06987},
eprinttype = {arXiv},
eprint = {1804.06987},
timestamp = {Fri, 15 Nov 2019 17:16:02 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | [
-0.5447632074356079,
-0.6640297174453735,
0.30926308035850525,
0.10046951472759247,
-0.1117061972618103,
-0.19872765243053436,
-0.3185359239578247,
-0.4073868989944458,
0.6115526556968689,
0.3372647762298584,
-0.7361536026000977,
-0.8518071174621582,
-0.5763975977897644,
0.0543290227651596... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/wikipedia-22-12-fr-embeddings | Cohere | 2023-03-22T16:53:41Z | 23 | 5 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-03-22T16:53:41Z | 2023-01-14T13:09:16.000Z | 2023-01-14T13:09:16 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | [
-0.7128458023071289,
-0.6910758018493652,
0.16595904529094696,
0.0328327938914299,
-0.1834392100572586,
-0.09956593066453934,
-0.3353114724159241,
-0.2657078504562378,
0.5978742241859436,
-0.01615077443420887,
-0.5326601266860962,
-0.869637131690979,
-0.647472083568573,
0.22638265788555145... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-zh-corpus-22-12 | Cohere | 2023-02-06T11:55:44Z | 23 | 4 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:55:44Z | 2023-01-31T13:13:33.000Z | 2023-01-31T13:13:33 | ---
annotations_creators:
- expert-generated
language:
- zh
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12).
For the orginal datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
Then compare these query embeddings with the document embeddings, either via a vector database (recommended) or by directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly reflects the fraction of queries for which a relevant document is found among the top-3 results.
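For clarity, a minimal sketch of how hit@k can be computed from ranked results (the run/qrels data structures here are assumptions for illustration, not files shipped with this dataset):
```python
def hit_at_k(ranked_docids, relevant_docids, k=3):
    """1 if at least one relevant document appears in the top-k results, else 0."""
    return int(any(docid in relevant_docids for docid in ranked_docids[:k]))

def mean_hit_at_k(run, qrels, k=3):
    """Average hit@k over all queries. run: {qid: ranked docids}, qrels: {qid: set of relevant docids}."""
    scores = [hit_at_k(run[qid], qrels.get(qid, set()), k) for qid in run]
    return sum(scores) / len(scores)
```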
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevance. Especially for the larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| [
-0.6337466239929199,
-0.7860816121101379,
0.36231157183647156,
0.19228750467300415,
-0.09558746218681335,
-0.08747369050979614,
-0.31093519926071167,
-0.48202773928642273,
0.5483473539352417,
0.21191829442977905,
-0.605864942073822,
-1.0422202348709106,
-0.6617672443389893,
0.3027292490005... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BuroIdentidadDigital/recibos_cfe | BuroIdentidadDigital | 2023-11-08T13:21:36Z | 23 | 1 | null | [
"license:c-uda",
"region:us"
] | 2023-11-08T13:21:36Z | 2023-02-09T17:35:09.000Z | 2023-02-09T17:35:09 | ---
license: c-uda
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Piro17/dataset-affecthqnet-fer2013 | Piro17 | 2023-02-10T14:13:09Z | 23 | 0 | null | [
"region:us"
] | 2023-02-10T14:13:09Z | 2023-02-10T14:07:09.000Z | 2023-02-10T14:07:09 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
'6': surprise
splits:
- name: train
num_bytes: 106887329.048
num_examples: 56532
download_size: 7975090261
dataset_size: 106887329.048
---
# Dataset Card for "dataset-affecthqnet-fer2013"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.564916729927063,
-0.21950310468673706,
0.1534249186515808,
0.4450785517692566,
-0.02531379461288452,
-0.2048967033624649,
0.4546227753162384,
-0.13596485555171967,
1.032752275466919,
0.31347817182540894,
-1.0194348096847534,
-0.5311287641525269,
-0.5433595776557922,
-0.03705420717597008... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jungsungmoon/Korean_dialog | jungsungmoon | 2023-02-21T02:06:59Z | 23 | 2 | null | [
"license:unknown",
"region:us"
] | 2023-02-21T02:06:59Z | 2023-02-21T01:46:53.000Z | 2023-02-21T01:46:53 | ---
license: unknown
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thewall/jolma | thewall | 2023-03-23T09:43:40Z | 23 | 0 | null | [
"license:openrail",
"region:us"
] | 2023-03-23T09:43:40Z | 2023-03-11T06:02:15.000Z | 2023-03-11T06:02:15 | ---
license: openrail
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
trondizzy/uk_en_combined_sets | trondizzy | 2023-03-12T09:08:56Z | 23 | 0 | null | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:uk",
"license:cc",
"region:us"
] | 2023-03-12T09:08:56Z | 2023-03-12T05:42:26.000Z | 2023-03-12T05:42:26 | ---
license: cc
task_categories:
- translation
language:
- en
- uk
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HuggingFaceGECLM/REDDIT_comments | HuggingFaceGECLM | 2023-03-17T07:52:51Z | 23 | 6 | null | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10B<n<100B",
"language:en",
"reddit",
"social-media",
"arxiv:2001.08435",
"region:us"
... | 2023-03-17T07:52:51Z | 2023-03-15T14:14:58.000Z | 2023-03-15T14:14:58 | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments of 50 high-quality subreddits, extracted from the REDDIT PushShift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
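For example, a single subreddit can be loaded as its own split (a sketch using the standard `datasets` API; streaming is advisable given the size of the larger splits):
```python
from datasets import load_dataset

# Stream only the "askscience" split instead of downloading the full dump
comments = load_dataset("HuggingFaceGECLM/REDDIT_comments", split="askscience", streaming=True)

for comment in comments:
    print(comment["subreddit_name_prefixed"], comment["body"][:80])
    break
```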
## Dataset Creation
### Curation Rationale
All the information fields have been cast to string, as their format changes over time from one dump to the next. A reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditors' usernames associated with their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
Though the selected subreddits are considered to be of higher quality, they can still reflect the expressions of bias and toxicity found elsewhere on the internet.
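As one possible anonymization step, the username fields can be replaced with non-reversible hashes before further processing (a sketch; the choice of fields and hashing scheme is an assumption, not a prescribed procedure):
```python
import hashlib

def anonymize(comment):
    # Replace identifying username fields with a stable, non-reversible hash
    for field in ("author", "author_fullname"):
        value = comment.get(field) or ""
        comment[field] = hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]
    return comment

# Works on both regular and streaming datasets loaded with `datasets`
anonymized_comments = comments.map(anonymize)
```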
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | [
-0.5571550726890564,
-0.764799177646637,
0.30486971139907837,
0.14595073461532593,
-0.42283767461776733,
0.17657782137393951,
-0.39302730560302734,
-0.12158465385437012,
0.5674823522567749,
0.5794950127601624,
-0.9758439064025879,
-0.826641321182251,
-0.7109535932540894,
0.3907043337821960... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AbderrahmanSkiredj1/IADD_darija_sentences | AbderrahmanSkiredj1 | 2023-03-24T16:28:41Z | 23 | 0 | null | [
"region:us"
] | 2023-03-24T16:28:41Z | 2023-03-24T16:28:39.000Z | 2023-03-24T16:28:39 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 449890
num_examples: 7213
download_size: 218476
dataset_size: 449890
---
# Dataset Card for "IADD_darija_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6361135840415955,
-0.5944143533706665,
0.17970088124275208,
0.4255814254283905,
-0.2010050117969513,
-0.36386415362358093,
0.059361882507801056,
-0.04858379065990448,
0.7731661796569824,
0.7095191478729248,
-0.7477059960365295,
-0.8472921252250671,
-0.7942864894866943,
-0.08547200262546... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-source-metrics/preprocessed_issues | open-source-metrics | 2023-11-23T14:47:45Z | 23 | 0 | null | [
"region:us"
] | 2023-11-23T14:47:45Z | 2023-03-24T22:49:09.000Z | 2023-03-24T22:49:09 | ---
dataset_info:
features:
- name: huggingface_hub
dtype: int64
- name: text_generation_inference
dtype: int64
- name: safetensors
dtype: int64
- name: tokenizers
dtype: int64
- name: transformers
dtype: int64
- name: diffusers
dtype: int64
- name: accelerate
dtype: int64
- name: chat_ui
dtype: int64
- name: candle
dtype: int64
- name: gradio
dtype: int64
- name: evaluate
dtype: int64
- name: pytorch_image_models
dtype: int64
- name: peft
dtype: int64
- name: optimum
dtype: int64
- name: datasets
dtype: int64
- name: hub_docs
dtype: int64
- name: langchain
dtype: int64
- name: stable_diffusion_webui
dtype: int64
- name: tensorflow
dtype: int64
- name: pytorch
dtype: int64
- name: openai_python
dtype: int64
- name: day
dtype: string
splits:
- name: raw
num_bytes: 19652
num_examples: 101
- name: wow
num_bytes: 19844
num_examples: 102
- name: eom
num_bytes: 19652
num_examples: 101
- name: eom_wow
num_bytes: 19844
num_examples: 102
download_size: 76401
dataset_size: 78992
configs:
- config_name: default
data_files:
- split: raw
path: data/raw-*
- split: wow
path: data/wow-*
- split: eom
path: data/eom-*
- split: eom_wow
path: data/eom_wow-*
---
# Dataset Card for "preprocessed_issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6352344155311584,
-0.43391847610473633,
0.3860571086406708,
0.46323561668395996,
-0.11402431130409241,
0.09525168687105179,
0.073766328394413,
-0.16852514445781708,
0.9205851554870605,
0.5766665935516357,
-0.864412784576416,
-0.7903054356575012,
-0.48112377524375916,
-0.1338403075933456... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/ipums | mstz | 2023-04-17T09:54:47Z | 23 | 0 | null | [
"task_categories:tabular-classification",
"language:en",
"ipums",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-17T09:54:47Z | 2023-04-17T08:46:50.000Z | 2023-04-17T08:46:50 | ---
language:
- en
tags:
- ipums
- tabular_classification
- binary_classification
- UCI
pretty_name: Ipums
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- ipums
---
# Ipums
The [Ipums dataset](https://archive-beta.ics.uci.edu/dataset/127/ipums+census+database) from the [UCI repository](https://archive-beta.ics.uci.edu/).
| [
-0.6777119040489197,
0.28261587023735046,
0.12167443335056305,
0.008859442546963692,
-0.1813691109418869,
0.09922033548355103,
0.479451984167099,
0.10014692693948746,
0.7029408812522888,
0.8322970271110535,
-0.5660287141799927,
-0.6540988683700562,
-0.571231484413147,
-0.09404069930315018,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alpayariyak/IAM_Sentences_LLaVA | alpayariyak | 2023-05-19T22:04:20Z | 23 | 0 | null | [
"region:us"
] | 2023-05-19T22:04:20Z | 2023-05-19T21:46:41.000Z | 2023-05-19T21:46:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 1053875995.077
num_examples: 5663
download_size: 1128902513
dataset_size: 1053875995.077
---
# Dataset Card for "IAM_Sentences_LLaVA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3481599688529968,
-0.4971083700656891,
0.3037846088409424,
0.4098030924797058,
-0.19033974409103394,
-0.2070833444595337,
0.07038180530071259,
-0.1522379219532013,
0.8599773645401001,
0.6183090806007385,
-0.83301842212677,
-0.7199834585189819,
-0.6487122774124146,
-0.07229381799697876,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arubenruben/cnn_dailymail_azure_pt_pt | arubenruben | 2023-06-06T11:08:32Z | 23 | 2 | null | [
"task_categories:summarization",
"task_categories:translation",
"language:pt",
"Machine Translation",
"region:us"
] | 2023-06-06T11:08:32Z | 2023-06-06T11:02:22.000Z | 2023-06-06T11:02:22 | ---
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 33317736
num_examples: 7729
- name: validation
num_bytes: 14690610
num_examples: 3810
- name: test
num_bytes: 33051715
num_examples: 7298
download_size: 48224108
dataset_size: 81060061
task_categories:
- summarization
- translation
language:
- pt
tags:
- Machine Translation
pretty_name: Portuguese CNN-Dailymail-Azure
---
# Dataset Card for "cnn_dailymail_azure_pt_pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.35713037848472595,
-0.2624604403972626,
0.12467079609632492,
0.38991302251815796,
-0.4968164265155792,
0.11426056921482086,
0.28499364852905273,
-0.036163561046123505,
0.6595224142074585,
0.4464585483074188,
-0.8707930445671082,
-1.0397547483444214,
-0.7970883846282959,
-0.2119494527578... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vwxyzjn/lm-human-preferences | vwxyzjn | 2023-09-01T02:02:15Z | 23 | 0 | null | [
"license:mit",
"region:us"
] | 2023-09-01T02:02:15Z | 2023-06-13T00:20:43.000Z | 2023-06-13T00:20:43 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null |