id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
ChanHE/testtestkan | 2023-10-01T09:42:04.000Z | [
"region:us"
] | ChanHE | null | null | null | 0 | 18 | Entry not found |
junaid20/question_answer | 2023-09-15T14:11:39.000Z | [
"license:other",
"region:us"
] | junaid20 | null | null | null | 0 | 18 | ---
license: other
---
|
TuningAI/Startups_V2 | 2023-09-15T14:01:40.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"startups ",
"ecommerce",
"tax",
"law",
"region:us"
] | TuningAI | null | null | null | 3 | 18 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- 'startups '
- ecommerce
- tax
- law
--- |
adamo1139/basic_economics_questions_ts_test_1 | 2023-09-17T12:06:03.000Z | [
"region:us"
] | adamo1139 | null | null | null | 0 | 18 | Synthetic question & answer dataset generated from a corpus of the book Basic Economics by Thomas Sowell.
Formatting could be improved: a model trained on this dataset writes \n tokens as literal words rather than as newlines, so the text presumably gets tokenized differently than expected.
Note that the prompt format isn't consistent across samples.
Spicyboros 7B (GGUF) was used to generate the synthetic responses, so everything was generated locally without leaving the device, as opposed to the common approach of using GPT-3.5 Turbo or GPT-4 for this purpose. |
Ayansk11/text_format | 2023-09-17T10:10:45.000Z | [
"region:us"
] | Ayansk11 | null | null | null | 0 | 18 | Entry not found |
MathiasFoster/whisper-v4 | 2023-09-19T00:53:11.000Z | [
"region:us"
] | MathiasFoster | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 19948406.0
num_examples: 324
- name: test
num_bytes: 607133.0
num_examples: 10
download_size: 20047841
dataset_size: 20555539.0
---
# Dataset Card for "whisper-v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atulsinghphd/demo | 2023-09-22T17:45:59.000Z | [
"license:openrail",
"region:us"
] | atulsinghphd | null | null | null | 0 | 18 | ---
license: openrail
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1334450.4
num_examples: 400
- name: test
num_bytes: 333612.6
num_examples: 100
download_size: 415248
dataset_size: 1668063.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
whermens/tmp2 | 2023-09-20T13:01:06.000Z | [
"license:unknown",
"region:us"
] | whermens | null | null | null | 0 | 18 | ---
license: unknown
---
|
SminC/pokemon_caption_data | 2023-09-21T11:09:37.000Z | [
"region:us"
] | SminC | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colored_image
dtype: image
splits:
- name: train
num_bytes: 25225724.0
num_examples: 303
download_size: 25174197
dataset_size: 25225724.0
---
# Dataset Card for "pokemon_caption_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Doub7e/coco_captions_T5 | 2023-09-21T15:34:12.000Z | [
"region:us"
] | Doub7e | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: blip_caption_beam_5
dtype: string
- name: T5_last_hidden_states
sequence:
sequence:
sequence: float32
- name: sentences_raw
sequence: string
splits:
- name: train
num_bytes: 416620666.0
num_examples: 5000
download_size: 445433251
dataset_size: 416620666.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "coco_captions_T5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/soict_train_dataset_with_wer | 2023-09-21T15:43:02.000Z | [
"region:us"
] | thanhduycao | null | null | null | 0 | 18 | Entry not found |
spacemanidol/dset | 2023-09-26T19:09:18.000Z | [
"region:us"
] | spacemanidol | null | null | 0 | 18 | Entry not found | |
Falah/neo-pop_surrealism | 2023-09-22T07:37:31.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1590730
num_examples: 10000
download_size: 18332
dataset_size: 1590730
---
# Dataset Card for "neo-pop_surrealism"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aditijha/instruct_v1_5k | 2023-09-22T21:16:56.000Z | [
"region:us"
] | aditijha | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 3688239.8415107066
num_examples: 5000
download_size: 1942992
dataset_size: 3688239.8415107066
---
# Dataset Card for "instruct_v1_5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anujsahani01/StarChat_tokenized | 2023-09-23T20:17:24.000Z | [
"region:us"
] | anujsahani01 | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 553543492
num_examples: 42541
- name: test
num_bytes: 185056664
num_examples: 14222
- name: validation
num_bytes: 527077084
num_examples: 40507
download_size: 306645974
dataset_size: 1265677240
---
# Dataset Card for "StarChat_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-sentence-transformer | 2023-09-24T01:06:46.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 30935944
num_examples: 14588
- name: test
num_bytes: 3865038
num_examples: 1823
- name: validation
num_bytes: 3875086
num_examples: 1824
download_size: 16115660
dataset_size: 38676068
---
# Dataset Card for "COVID-QA-sentence-transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JCAI2000/LargerImagesLabelled | 2023-09-25T10:18:43.000Z | [
"region:us"
] | JCAI2000 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 513933217.0
num_examples: 42
download_size: 182096737
dataset_size: 513933217.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LargerImagesLabelled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ricardosantoss/top10 | 2023-09-25T11:43:38.000Z | [
"region:us"
] | ricardosantoss | null | null | null | 0 | 18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: ICD9_CODE
sequence: string
splits:
- name: train
num_bytes: 295026309
num_examples: 31478
- name: test
num_bytes: 37572145
num_examples: 4000
- name: validation
num_bytes: 37192991
num_examples: 4000
download_size: 206008521
dataset_size: 369791445
---
# Dataset Card for "top10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erhwenkuo/alpaca-data-gpt4-chinese-zhtw | 2023-09-26T14:03:00.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"gpt4",
"alpaca",
"instruction-finetuning",
"arxiv:2304.03277",
"region:us"
] | erhwenkuo | null | null | null | 1 | 18 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 33817106
num_examples: 52049
download_size: 22275874
dataset_size: 33817106
task_categories:
- text-generation
- conversational
- question-answering
language:
- zh
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- gpt4
- alpaca
- instruction-finetuning
pretty_name: ' alpaca-data-gpt4-chinese-zhtw'
size_categories:
- 10K<n<100K
---
# Dataset Card for "alpaca-data-gpt4-chinese-zhtw"
This dataset contains Chinese (zh-tw) instruction-following data generated by GPT-4 using Alpaca prompts, intended for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This dataset is a translation from English to Chinese.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are longer and of higher quality. Here is an example:
#### Example from Alpaca-GPT4:
```python
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```python
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
tyzhu/squad_no_title_v4_train_10_eval_10 | 2023-09-26T14:59:16.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 203084
num_examples: 138
- name: validation
num_bytes: 48707
num_examples: 50
download_size: 64510
dataset_size: 251791
---
# Dataset Card for "squad_no_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lowem1/mimic_radiology_ocr | 2023-09-27T15:47:13.000Z | [
"region:us"
] | lowem1 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: tag
dtype: string
- name: ocr_data
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2270338
num_examples: 1000
download_size: 1178315
dataset_size: 2270338
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mimic_radiology_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_wrong_rare_v4_train_30_eval_10 | 2023-09-27T16:18:19.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 546548
num_examples: 368
- name: validation
num_bytes: 50213
num_examples: 50
download_size: 105441
dataset_size: 596761
---
# Dataset Card for "squad_wrong_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Illia56/Military-Aircraft-Detection | 2023-09-28T05:40:58.000Z | [
"task_categories:object-detection",
"task_categories:zero-shot-classification",
"task_categories:zero-shot-image-classification",
"task_categories:depth-estimation",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1M<n<10M",
"license:apache-2.0",
"Image... | Illia56 | null | null | null | 1 | 18 | ---
license: apache-2.0
task_categories:
- object-detection
- zero-shot-classification
- zero-shot-image-classification
- depth-estimation
- image-classification
- image-segmentation
tags:
- Image
- 'Computer Vision '
- Military
- Aviation
- Engineering
size_categories:
- 1M<n<10M
---
Dataset for object detection of military aircraft.
Bounding boxes are in PASCAL VOC format (xmin, ymin, xmax, ymax); see the parsing sketch after the type list.
43 aircraft types:
(A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52, Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)
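A minimal sketch of reading one annotation, assuming the usual PASCAL VOC XML layout (the helper name and file path are hypothetical, not part of this dataset's documentation):
```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    """Return [(class_name, (xmin, ymin, xmax, ymax)), ...] from a VOC XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(float(bb.findtext(k))) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes
```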
Please let me know if you find wrong labels or duplicated images. |
loremipsum3658/adj_extension | 2023-09-28T17:03:46.000Z | [
"region:us"
] | loremipsum3658 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: data
dtype: string
- name: titulo
dtype: string
- name: andamento
dtype: string
- name: nup
dtype: 'null'
- name: classificacao_andamento
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 71124
num_examples: 135
download_size: 23610
dataset_size: 71124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "adj_extension"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
junaid20/infogen_labs | 2023-09-29T10:01:08.000Z | [
"region:us"
] | junaid20 | null | null | null | 0 | 18 | Entry not found |
mHossain/sentiNob_v1 | 2023-09-30T05:50:15.000Z | [
"region:us"
] | mHossain | null | null | null | 0 | 18 | Entry not found |
sitloboi2012/rvl_cdip_small_dataset | 2023-10-01T08:17:51.000Z | [
"region:us"
] | sitloboi2012 | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 1746183.0
num_examples: 15
download_size: 1643991
dataset_size: 1746183.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rvl_cdip_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CDS-GROUP/LFR | 2023-10-01T13:36:26.000Z | [
"task_categories:graph-ml",
"size_categories:n<1K",
"language:en",
"license:gpl-3.0",
"biology",
"region:us"
] | CDS-GROUP | null | null | null | 0 | 18 | ---
license: gpl-3.0
task_categories:
- graph-ml
language:
- en
tags:
- biology
pretty_name: LFR
size_categories:
- n<1K
--- |
FelixdoingAI/IP2P-200 | 2023-10-03T08:07:19.000Z | [
"region:us"
] | FelixdoingAI | null | null | null | 0 | 18 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 17732714.0
num_examples: 200
download_size: 17730243
dataset_size: 17732714.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "instructpix2pix-clip-filtered200-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Trelis/openassistant-llama-style | 2023-10-04T16:23:13.000Z | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",... | Trelis | null | null | null | 1 | 18 | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10k
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - Llama 2 Style
This dataset allows for fine-tuning chat models using [INST] and [/INST] as the beginning and end of sequence tokens.
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then filtered to:
- replace instances of '### Human:' with '[INST]'
- replace instances of '### Assistant:' with '</s><s> [/INST]' (to encourage the model to emit </s> when it has finished a response)
- if a row of data ends with an assistant response, then [INST] was additionally added to the end of that row of data (see the sketch below).
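A minimal sketch of that substitution, assuming each row stores the whole conversation as a single `text` string as in the source dataset (the helper name is hypothetical):
```python
def to_llama_style(text: str) -> str:
    """Rewrite '### Human:'/'### Assistant:' markers into Llama-2-style tokens."""
    text = text.replace("### Human:", "[INST]")
    text = text.replace("### Assistant:", "</s><s> [/INST]")
    # If the last marker is an assistant one, the row ends with an assistant
    # response, so hand the turn back by appending [INST].
    if text.rfind("[/INST]") > text.rfind("[INST]"):
        text += " [INST]"
    return text
```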
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in `ready_for_export` state without spam and deleted messages including message labels.
The oasst_ready-trees file usually is sufficient for supervised fine-tuning (SFT) & reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
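A minimal sketch of that reconstruction, assuming `messages` is a list of dicts carrying the documented `message_id` and `parent_id` fields (`parent_id` absent or None for root prompts):
```python
from collections import defaultdict

def build_trees(messages):
    """Nest flat oasst messages into trees via parent_id -> message_id links."""
    children = defaultdict(list)
    roots = []
    for m in messages:
        if m.get("parent_id") is None:
            roots.append(m)
        else:
            children[m["parent_id"]].append(m)

    def attach(node):
        node["replies"] = [attach(child) for child in children[node["message_id"]]]
        return node

    return [attach(root) for root in roots]
```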
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) |
autoevaluate/autoeval-eval-xsum-default-e3e096-60495145410 | 2023-10-04T17:19:17.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 18 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: google/pegasus-xsum
metrics: ['bertscore']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@zuzannad1](https://huggingface.co/zuzannad1) for evaluating this model. |
renumics/speech_commands-ast-finetuned-results | 2023-10-09T09:18:38.000Z | [
"region:us"
] | renumics | null | null | null | 0 | 18 | ---
dataset_info:
config_name: v0.01
features:
- name: probability
dtype: float64
- name: prediction
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': _silence_
- name: embedding
sequence: float32
- name: entropy
dtype: float64
splits:
- name: train
num_bytes: 1839348
num_examples: 51093
- name: validation
num_bytes: 244764
num_examples: 6799
- name: test
num_bytes: 110916
num_examples: 3081
download_size: 0
dataset_size: 2195028
configs:
- config_name: v0.01
data_files:
- split: train
path: v0.01/train-*
- split: validation
path: v0.01/validation-*
- split: test
path: v0.01/test-*
---
# Dataset Card for "speech_commands-ast-finetuned-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andreabac3/hellaswag_ita | 2023-10-06T07:37:11.000Z | [
"region:us"
] | andreabac3 | null | null | null | 0 | 18 | ---
dataset_info:
config_name: hellaswag_ita
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
dtype: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
- name: translated_ctx
dtype: string
splits:
- name: test
num_bytes: 8385385
num_examples: 10003
- name: validation
num_bytes: 8489330
num_examples: 10042
download_size: 9333456
dataset_size: 16874715
configs:
- config_name: hellaswag_ita
data_files:
- split: test
path: hellaswag_ita/test-*
- split: validation
path: hellaswag_ita/validation-*
---
# Dataset Card for "hellaswag_ita"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
McSpicyWithMilo/infographic-instructions | 2023-10-08T10:38:36.000Z | [
"region:us"
] | McSpicyWithMilo | null | null | null | 0 | 18 | Entry not found |
datacommons_factcheck | 2023-06-01T14:59:47.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | null | A dataset of fact checked claims by news media maintained by datacommons.org | @InProceedings{huggingface:dataset,
title = {Data Commons 2019 Fact Checks},
authors={datacommons.org},
year={2019}
} | null | 2 | 17 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: null
pretty_name: DataCommons Fact Checked claims
dataset_info:
- config_name: fctchk_politifact_wapo
features:
- name: reviewer_name
dtype: string
- name: claim_text
dtype: string
- name: review_date
dtype: string
- name: review_url
dtype: string
- name: review_rating
dtype: string
- name: claim_author_name
dtype: string
- name: claim_date
dtype: string
splits:
- name: train
num_bytes: 1772321
num_examples: 5632
download_size: 671896
dataset_size: 1772321
- config_name: weekly_standard
features:
- name: reviewer_name
dtype: string
- name: claim_text
dtype: string
- name: review_date
dtype: string
- name: review_url
dtype: string
- name: review_rating
dtype: string
- name: claim_author_name
dtype: string
- name: claim_date
dtype: string
splits:
- name: train
num_bytes: 35061
num_examples: 132
download_size: 671896
dataset_size: 35061
config_names:
- fctchk_politifact_wapo
- weekly_standard
---
# Dataset Card for DataCommons Fact Checked claims
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Data Commons fact checking FAQ](https://datacommons.org/factcheck/faq)
### Dataset Summary
A dataset of fact checked claims by news media maintained by [datacommons.org](https://datacommons.org/) containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker.
The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), and [The Washington Post](https://www.washingtonpost.com/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The data is in English (`en`).
## Dataset Structure
### Data Instances
An example of fact checking instance looks as follows:
```
{'claim_author_name': 'Facebook posts',
'claim_date': '2019-01-01',
'claim_text': 'Quotes Michelle Obama as saying, "White folks are what’s wrong with America."',
'review_date': '2019-01-03',
'review_rating': 'Pants on Fire',
'review_url': 'https://www.politifact.com/facebook-fact-checks/statements/2019/jan/03/facebook-posts/did-michelle-obama-once-say-white-folks-are-whats-/',
'reviewer_name': 'PolitiFact'}
```
### Data Fields
A data instance has the following fields:
- `review_date`: the day the fact checking report was posted. Missing values are replaced with empty strings
- `review_url`: URL for the full fact checking report
- `reviewer_name`: the name of the fact checking service.
- `claim_text`: the full text of the claim being reviewed.
- `claim_author_name`: the author of the claim being reviewed. Missing values are replaced with empty strings
- `claim_date` the date of the claim. Missing values are replaced with empty strings
- `review_rating`: the judgments of the fact checker (under `alternateName`, names vary by fact checker)
### Data Splits
No splits are provided. There are a total of 5632 claims fact-checked.
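A minimal loading sketch using the config names above (this is a script-based dataset, so recent versions of 🤗 Datasets may additionally require `trust_remote_code=True`):
```python
from datasets import load_dataset

# Configs: "fctchk_politifact_wapo" (5632 claims) or "weekly_standard" (132 claims).
fact_checks = load_dataset("datacommons_factcheck", "fctchk_politifact_wapo", split="train")
print(len(fact_checks))
print(fact_checks[0]["review_rating"])
```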
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), [The Washington Post](https://www.washingtonpost.com/), and [The Weekly Standard](https://www.weeklystandard.com/).
- [FactCheck.org](https://www.factcheck.org/) self describes as "a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics." It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.
- [PolitiFact](https://www.politifact.com/) describe their ethics as "seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside." It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.
- [The Washington Post](https://www.washingtonpost.com/) is a newspaper considered to be near the center of the American political spectrum. In 2013 Amazon.com founder Jeff Bezos bought the newspaper and affiliated publications.
The original data source also contains 132 items reviewed by [The Weekly Standard](https://www.weeklystandard.com/), which was a neo-conservative American newspaper. It is the most politically loaded source of the group: it was originally a vocal critic of the practice of fact-checking, and has historically taken stances [close to the American right](https://en.wikipedia.org/wiki/The_Weekly_Standard#Support_of_the_invasion_of_Iraq). It also had to admit responsibility for baseless accusations against a well-known author in a public [libel case](https://en.wikipedia.org/wiki/The_Weekly_Standard#Libel_case). The fact-checked items from this source can be found in the `weekly_standard` configuration but should be used only with full understanding of this context.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
See section above describing the [fact checking organizations](#who-are-the-annotators?).
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
This fact checking dataset is maintained by [datacommons.org](https://datacommons.org/), a Google initiative.
### Licensing Information
All fact checked items are released under a `CC-BY-NC-4.0` License.
### Citation Information
Data Commons 2020, Fact Checks, electronic dataset, Data Commons, viewed 16 Dec 2020, <https://datacommons.org>.
### Contributions
Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset. |
pec | 2023-06-01T14:59:50.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_ids:dialogue-modeling",
"task_ids:utterance-retrieval",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orig... | null | \
A dataset of around 350K persona-based empathetic conversations. Each speaker is associated with a persona, which comprises multiple persona sentences. The response of each conversation is empathetic. | \
@inproceedings{zhong2020towards,
title = "Towards Persona-Based Empathetic Conversational Models",
author = "Zhong, Peixiang and
Zhang, Chen and
Wang, Hao and
Liu, Yong and
Miao, Chunyan",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
pages = "6556--6566"} | null | 3 | 17 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-retrieval
task_ids:
- dialogue-modeling
- utterance-retrieval
paperswithcode_id: pec
pretty_name: Persona-Based Empathetic Conversational
dataset_info:
- config_name: happy
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 643196978
num_examples: 157195
- name: test
num_bytes: 92003042
num_examples: 22730
- name: validation
num_bytes: 81132088
num_examples: 19829
download_size: 252434681
dataset_size: 816332108
- config_name: offmychest
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 518616402
num_examples: 123968
- name: test
num_bytes: 64173390
num_examples: 15324
- name: validation
num_bytes: 66675909
num_examples: 16004
download_size: 252434681
dataset_size: 649465701
- config_name: all
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 1162655628
num_examples: 281163
- name: test
num_bytes: 156310498
num_examples: 38054
- name: validation
num_bytes: 147940164
num_examples: 35833
download_size: 252434681
dataset_size: 1466906290
config_names:
- all
- happy
- offmychest
---
# Dataset Card for PEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [PEC repository](https://github.com/zhongpeixiang/PEC)
- **Paper:** [Towards Persona-Based Empathetic Conversational Models](https://www.aclweb.org/anthology/2020.emnlp-main.531/)
- **Point of Contact:** [Peixiang Zhong](mailto:zhongpeixiang@gmail.com)
### Dataset Summary
The PEC dataset is an English-language dataset of open-domain conversations gathered from two subreddits on Reddit, i.e., happy and offmychest. PEC has around 350K persona-based empathetic conversations. Each utterance is associated with a speaker, and each speaker has a persona of multiple persona sentences. The conversations in PEC are more empathetic than casual conversations. The conversations in the happy domain are mostly positive, whereas the conversations in the offmychest domain are mostly negative.
### Supported Tasks and Leaderboards
- `dialogue-modeling`, `utterance-retrieval`: this dataset can be used to train a generative or retrieval-based conversational model.
### Languages
English
## Dataset Structure
### Data Instances
A typical data example comprises a list of context utterances, a list of context speakers, a response to the context, the response speaker and the persona of the response speaker.
An example from PEC looks as follows:
```
{'context': ['found out this morning i got a job promotion ! ! !'],
'context_speakers': ['HeWentToJared91'],
'personas': [
"i ca n't stand working in the ugli .",
'i ’ve always liked my eyes except for the fact that they ca n’t shoot lasers',
'i feel really bad about myself as a person right now , and i could really use a hand .',
'i drank a coffee , and it just made me feel even more exhausted .',
'i want a natsuki t shirt',
"i 've dealt with depression in the past .",
'i love red dead 2'],
'response': "you look like a nice person ! we 're proud of you , and i bet you earned that promotion !",
'response_speaker': 'tylock'}
```
### Data Fields
- `context`: a list of strings, each string denotes a context utterance.
- `context_speakers`: a list of strings, each string denotes a speaker.
- `response`: a string denoting the response to the `context`.
- `response_speaker`: a string denoting the speaker of `response`.
- `personas`: a list of strings, each string denotes a persona sentence of `response_speaker`.
### Data Splits
The data is split into a training, validation and test set for each of the three domains. Note that the *all* domain is the concatenation of the *happy* and *offmychest* domains.
| domain | train | validation | test |
|------------|-------:|-----------:|------:|
| happy | 157195 | 19829 | 22730 |
| offmychest | 123968 | 16004 | 15324 |
| all | 281163 | 35833 | 38054 |
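A minimal loading sketch using the config names above (this is a script-based dataset, so recent versions of 🤗 Datasets may additionally require `trust_remote_code=True`):
```python
from datasets import load_dataset

# Configs: "happy", "offmychest", or "all" (happy + offmychest combined).
pec = load_dataset("pec", "happy")
print(pec["train"][0]["response"])
```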
## Dataset Creation
### Curation Rationale
PEC was built to provide a testbed for machines to learn persona-based empathetic responding. In our empirical analysis, we found that different personas have different styles of empathetic responding. This dataset can also be used to investigate the link between persona and empathy in human conversations. According to our human assessment, the conversations on the happy and offmychest subreddits are significantly more empathetic than casual conversations.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained via the [pushshift API](https://pushshift.io/using-bigquery-with-reddit-data/) via Google BigQuery.
#### Who are the source language producers?
The language producers are users of the [r/happy](https://www.reddit.com/r/happy/), and [r/offmychest](https://www.reddit.com/r/offmychest/) subreddits between 2012 and 2020. No further demographic information was available from the data source.
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset includes the speaker IDs of users on *happy* and *offmychest* subreddits.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop more personalised and empathetic conversational systems, which is an important milestone towards truly human-like conversational agents.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
A small portion of the dataset has the issues of sexism, hate, and harassment. The persona sentences are noisy.
## Additional Information
### Dataset Curators
The dataset was initially created by Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao, jointly done at Nanyang Technological University and Alibaba Group.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{zhong-etal-2020-towards,
title = "Towards Persona-Based Empathetic Conversational Models",
author = "Zhong, Peixiang and
Zhang, Chen and
Wang, Hao and
Liu, Yong and
Miao, Chunyan",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
pages = "6556--6566"
}
```
### Contributions
Thanks to [@zhongpeixiang](https://github.com/zhongpeixiang) for adding this dataset. |
KETI-AIR/nikl | 2021-06-08T06:42:34.000Z | [
"region:us"
] | KETI-AIR | Description is **formatted** as markdown.
It should also contain any processing which has been applied (if any),
(e.g. corrupted example skipped, images cropped,...): | null | 0 | 17 | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# National Institute of Korean Language(NIKL) Corpus
| |
SetFit/TREC-QC | 2022-01-15T22:42:56.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 17 | # TREC Question Classification
Question classification in coarse and fine-grained categories.
Source:
[Experimental Data for Question Classification](https://cogcomp.seas.upenn.edu/Data/QA/QC/)
Xin Li, Dan Roth, Learning Question Classifiers. COLING'02, Aug., 2002. |
SocialGrep/the-2022-trucker-strike-on-reddit | 2022-07-01T18:00:49.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | null | null | null | 1 | 17 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-2022-trucker-strike-on-reddit
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-2022-trucker-strike-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=the2022truckerstrikeonreddit)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=the2022truckerstrikeonreddit)
### Dataset Summary
This corpus contains all the comments under the /r/Ottawa convoy megathreads.
Comments are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit comment.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'sentiment': the evaluated sentiment of the data point, if any.
- 'body': the text of the data point.
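A minimal sketch of loading and filtering on the fields above, assuming the corpus is exposed as a single default `train` split (the split layout is not stated in this card):
```python
from datasets import load_dataset

ds = load_dataset("SocialGrep/the-2022-trucker-strike-on-reddit")
comments = ds["train"].filter(lambda row: row["type"] == "comment")
print(comments[0]["score"], comments[0]["body"])
```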
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
Tevatron/scifact | 2021-09-13T23:32:59.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{Wadden2020FactOF,
title={Fact or Fiction: Verifying Scientific Claims},
author={David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
booktitle={EMNLP},
year={2020},
} | null | 0 | 17 | Entry not found |
ctu-aic/csfever_nli | 2022-02-22T11:13:35.000Z | [
"region:us"
] | ctu-aic | CsfeverNLI is an NLI version of the Czech Csfever dataset | todo | null | 1 | 17 | |
jmamou/augmented-glue-sst2 | 2022-07-17T12:25:34.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en-US",
"license:unknown",
"region:us"
] | jmamou | null | null | null | 0 | 17 | ---
annotations_creators:
- machine-generated
extended:
- original
language_creators:
- machine-generated
language:
- en-US
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for Augmented-GLUE-SST2
Automatically augmented data from the train split of the SST-2 dataset, using a conditional text generation approach.
Code used to generate this file will soon be available at https://github.com/IntelLabs/nlp-architect.
|
mozilla-foundation/common_voice_4_0 | 2023-07-29T16:00:01.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 1 | 17 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 1K<n<10K
it:
- 10K<n<100K
ja:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lv:
- 1K<n<10K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
pt:
- 10K<n<100K
rm-sursilv:
- n<1K
ru:
- 10K<n<100K
rw:
- 10K<n<100K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
ta:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- n<1K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 4
language_bcp47:
- ab
- ar
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- ia
- id
- it
- ja
- kab
- ky
- lv
- mn
- nl
- pt
- rm-sursilv
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 4257 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 3401 validated hours in 40 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Mongolian, Persian, Portuguese, Romansh Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A short sketch of this access pattern follows the field list below.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
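A minimal sketch of the access pattern described for the `audio` field; dataset access is gated, and the `"et"` config is chosen only to match the example instance above:
```python
from datasets import load_dataset, Audio

ds = load_dataset("mozilla-foundation/common_voice_4_0", "et", split="train", use_auth_token=True)

sample = ds[0]["audio"]          # indexing the sample first decodes only this one clip
print(sample["sampling_rate"])   # 48000 by default

# resample lazily to 16 kHz by casting the column
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```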
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for various reasons.
The other data is data that has not yet been reviewed.
The dev, test, and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is nearly impossible to infer from audio data alone whether a sentence is a quotation or not. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_4_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
mrm8488/AnswerSum | 2022-03-27T19:41:12.000Z | [
"region:us"
] | mrm8488 | null | null | null | 0 | 17 | Entry not found |
mrm8488/ImageNet1K-train | 2022-04-28T11:06:11.000Z | [
"region:us"
] | mrm8488 | null | null | null | 0 | 17 | mapping:
```
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
n01491361 tiger shark, Galeocerdo cuvieri
n01494475 hammerhead, hammerhead shark
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
n01530575 brambling, Fringilla montifringilla
n01531178 goldfinch, Carduelis carduelis
n01532829 house finch, linnet, Carpodacus mexicanus
n01534433 junco, snowbird
n01537544 indigo bunting, indigo finch, indigo bird, Passerina cyanea
n01558993 robin, American robin, Turdus migratorius
n01560419 bulbul
n01580077 jay
n01582220 magpie
n01592084 chickadee
n01601694 water ouzel, dipper
n01608432 kite
n01614925 bald eagle, American eagle, Haliaeetus leucocephalus
n01616318 vulture
n01622779 great grey owl, great gray owl, Strix nebulosa
n01629819 European fire salamander, Salamandra salamandra
n01630670 common newt, Triturus vulgaris
n01631663 eft
n01632458 spotted salamander, Ambystoma maculatum
n01632777 axolotl, mud puppy, Ambystoma mexicanum
n01641577 bullfrog, Rana catesbeiana
n01644373 tree frog, tree-frog
n01644900 tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
n01664065 loggerhead, loggerhead turtle, Caretta caretta
n01665541 leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
n01667114 mud turtle
n01667778 terrapin
n01669191 box turtle, box tortoise
n01675722 banded gecko
n01677366 common iguana, iguana, Iguana iguana
n01682714 American chameleon, anole, Anolis carolinensis
n01685808 whiptail, whiptail lizard
n01687978 agama
n01688243 frilled lizard, Chlamydosaurus kingi
n01689811 alligator lizard
n01692333 Gila monster, Heloderma suspectum
n01693334 green lizard, Lacerta viridis
n01694178 African chameleon, Chamaeleo chamaeleon
n01695060 Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis
n01697457 African crocodile, Nile crocodile, Crocodylus niloticus
n01698640 American alligator, Alligator mississipiensis
n01704323 triceratops
n01728572 thunder snake, worm snake, Carphophis amoenus
n01728920 ringneck snake, ring-necked snake, ring snake
n01729322 hognose snake, puff adder, sand viper
n01729977 green snake, grass snake
n01734418 king snake, kingsnake
n01735189 garter snake, grass snake
n01737021 water snake
n01739381 vine snake
n01740131 night snake, Hypsiglena torquata
n01742172 boa constrictor, Constrictor constrictor
n01744401 rock python, rock snake, Python sebae
n01748264 Indian cobra, Naja naja
n01749939 green mamba
n01751748 sea snake
n01753488 horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
n01755581 diamondback, diamondback rattlesnake, Crotalus adamanteus
n01756291 sidewinder, horned rattlesnake, Crotalus cerastes
n01768244 trilobite
n01770081 harvestman, daddy longlegs, Phalangium opilio
n01770393 scorpion
n01773157 black and gold garden spider, Argiope aurantia
n01773549 barn spider, Araneus cavaticus
n01773797 garden spider, Aranea diademata
n01774384 black widow, Latrodectus mactans
n01774750 tarantula
n01775062 wolf spider, hunting spider
n01776313 tick
n01784675 centipede
n01795545 black grouse
n01796340 ptarmigan
n01797886 ruffed grouse, partridge, Bonasa umbellus
n01798484 prairie chicken, prairie grouse, prairie fowl
n01806143 peacock
n01806567 quail
n01807496 partridge
n01817953 African grey, African gray, Psittacus erithacus
n01818515 macaw
n01819313 sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
n01820546 lorikeet
n01824575 coucal
n01828970 bee eater
n01829413 hornbill
n01833805 hummingbird
n01843065 jacamar
n01843383 toucan
n01847000 drake
n01855032 red-breasted merganser, Mergus serrator
n01855672 goose
n01860187 black swan, Cygnus atratus
n01871265 tusker
n01872401 echidna, spiny anteater, anteater
n01873310 platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus
n01877812 wallaby, brush kangaroo
n01882714 koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
n01883070 wombat
n01910747 jellyfish
n01914609 sea anemone, anemone
n01917289 brain coral
n01924916 flatworm, platyhelminth
n01930112 nematode, nematode worm, roundworm
n01943899 conch
n01944390 snail
n01945685 slug
n01950731 sea slug, nudibranch
n01955084 chiton, coat-of-mail shell, sea cradle, polyplacophore
n01968897 chambered nautilus, pearly nautilus, nautilus
n01978287 Dungeness crab, Cancer magister
n01978455 rock crab, Cancer irroratus
n01980166 fiddler crab
n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica
n01983481 American lobster, Northern lobster, Maine lobster, Homarus americanus
n01984695 spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
n01985128 crayfish, crawfish, crawdad, crawdaddy
n01986214 hermit crab
n01990800 isopod
n02002556 white stork, Ciconia ciconia
n02002724 black stork, Ciconia nigra
n02006656 spoonbill
n02007558 flamingo
n02009229 little blue heron, Egretta caerulea
n02009912 American egret, great white heron, Egretta albus
n02011460 bittern
n02012849 crane
n02013706 limpkin, Aramus pictus
n02017213 European gallinule, Porphyrio porphyrio
n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana
n02018795 bustard
n02025239 ruddy turnstone, Arenaria interpres
n02027492 red-backed sandpiper, dunlin, Erolia alpina
n02028035 redshank, Tringa totanus
n02033041 dowitcher
n02037110 oystercatcher, oyster catcher
n02051845 pelican
n02056570 king penguin, Aptenodytes patagonica
n02058221 albatross, mollymawk
n02066245 grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus
n02071294 killer whale, killer, orca, grampus, sea wolf, Orcinus orca
n02074367 dugong, Dugong dugon
n02077923 sea lion
n02085620 Chihuahua
n02085782 Japanese spaniel
n02085936 Maltese dog, Maltese terrier, Maltese
n02086079 Pekinese, Pekingese, Peke
n02086240 Shih-Tzu
n02086646 Blenheim spaniel
n02086910 papillon
n02087046 toy terrier
n02087394 Rhodesian ridgeback
n02088094 Afghan hound, Afghan
n02088238 basset, basset hound
n02088364 beagle
n02088466 bloodhound, sleuthhound
n02088632 bluetick
n02089078 black-and-tan coonhound
n02089867 Walker hound, Walker foxhound
n02089973 English foxhound
n02090379 redbone
n02090622 borzoi, Russian wolfhound
n02090721 Irish wolfhound
n02091032 Italian greyhound
n02091134 whippet
n02091244 Ibizan hound, Ibizan Podenco
n02091467 Norwegian elkhound, elkhound
n02091635 otterhound, otter hound
n02091831 Saluki, gazelle hound
n02092002 Scottish deerhound, deerhound
n02092339 Weimaraner
n02093256 Staffordshire bullterrier, Staffordshire bull terrier
n02093428 American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier
n02093647 Bedlington terrier
n02093754 Border terrier
n02093859 Kerry blue terrier
n02093991 Irish terrier
n02094114 Norfolk terrier
n02094258 Norwich terrier
n02094433 Yorkshire terrier
n02095314 wire-haired fox terrier
n02095570 Lakeland terrier
n02095889 Sealyham terrier, Sealyham
n02096051 Airedale, Airedale terrier
n02096177 cairn, cairn terrier
n02096294 Australian terrier
n02096437 Dandie Dinmont, Dandie Dinmont terrier
n02096585 Boston bull, Boston terrier
n02097047 miniature schnauzer
n02097130 giant schnauzer
n02097209 standard schnauzer
n02097298 Scotch terrier, Scottish terrier, Scottie
n02097474 Tibetan terrier, chrysanthemum dog
n02097658 silky terrier, Sydney silky
n02098105 soft-coated wheaten terrier
n02098286 West Highland white terrier
n02098413 Lhasa, Lhasa apso
n02099267 flat-coated retriever
n02099429 curly-coated retriever
n02099601 golden retriever
n02099712 Labrador retriever
n02099849 Chesapeake Bay retriever
n02100236 German short-haired pointer
n02100583 vizsla, Hungarian pointer
n02100735 English setter
n02100877 Irish setter, red setter
n02101006 Gordon setter
n02101388 Brittany spaniel
n02101556 clumber, clumber spaniel
n02102040 English springer, English springer spaniel
n02102177 Welsh springer spaniel
n02102318 cocker spaniel, English cocker spaniel, cocker
n02102480 Sussex spaniel
n02102973 Irish water spaniel
n02104029 kuvasz
n02104365 schipperke
n02105056 groenendael
n02105162 malinois
n02105251 briard
n02105412 kelpie
n02105505 komondor
n02105641 Old English sheepdog, bobtail
n02105855 Shetland sheepdog, Shetland sheep dog, Shetland
n02106030 collie
n02106166 Border collie
n02106382 Bouvier des Flandres, Bouviers des Flandres
n02106550 Rottweiler
n02106662 German shepherd, German shepherd dog, German police dog, alsatian
n02107142 Doberman, Doberman pinscher
n02107312 miniature pinscher
n02107574 Greater Swiss Mountain dog
n02107683 Bernese mountain dog
n02107908 Appenzeller
n02108000 EntleBucher
n02108089 boxer
n02108422 bull mastiff
n02108551 Tibetan mastiff
n02108915 French bulldog
n02109047 Great Dane
n02109525 Saint Bernard, St Bernard
n02109961 Eskimo dog, husky
n02110063 malamute, malemute, Alaskan malamute
n02110185 Siberian husky
n02110341 dalmatian, coach dog, carriage dog
n02110627 affenpinscher, monkey pinscher, monkey dog
n02110806 basenji
n02110958 pug, pug-dog
n02111129 Leonberg
n02111277 Newfoundland, Newfoundland dog
n02111500 Great Pyrenees
n02111889 Samoyed, Samoyede
n02112018 Pomeranian
n02112137 chow, chow chow
n02112350 keeshond
n02112706 Brabancon griffon
n02113023 Pembroke, Pembroke Welsh corgi
n02113186 Cardigan, Cardigan Welsh corgi
n02113624 toy poodle
n02113712 miniature poodle
n02113799 standard poodle
n02113978 Mexican hairless
n02114367 timber wolf, grey wolf, gray wolf, Canis lupus
n02114548 white wolf, Arctic wolf, Canis lupus tundrarum
n02114712 red wolf, maned wolf, Canis rufus, Canis niger
n02114855 coyote, prairie wolf, brush wolf, Canis latrans
n02115641 dingo, warrigal, warragal, Canis dingo
n02115913 dhole, Cuon alpinus
n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
n02117135 hyena, hyaena
n02119022 red fox, Vulpes vulpes
n02119789 kit fox, Vulpes macrotis
n02120079 Arctic fox, white fox, Alopex lagopus
n02120505 grey fox, gray fox, Urocyon cinereoargenteus
n02123045 tabby, tabby cat
n02123159 tiger cat
n02123394 Persian cat
n02123597 Siamese cat, Siamese
n02124075 Egyptian cat
n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
n02127052 lynx, catamount
n02128385 leopard, Panthera pardus
n02128757 snow leopard, ounce, Panthera uncia
n02128925 jaguar, panther, Panthera onca, Felis onca
n02129165 lion, king of beasts, Panthera leo
n02129604 tiger, Panthera tigris
n02130308 cheetah, chetah, Acinonyx jubatus
n02132136 brown bear, bruin, Ursus arctos
n02133161 American black bear, black bear, Ursus americanus, Euarctos americanus
n02134084 ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
n02134418 sloth bear, Melursus ursinus, Ursus ursinus
n02137549 mongoose
n02138441 meerkat, mierkat
n02165105 tiger beetle
n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
n02167151 ground beetle, carabid beetle
n02168699 long-horned beetle, longicorn, longicorn beetle
n02169497 leaf beetle, chrysomelid
n02172182 dung beetle
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant, emmet, pismire
n02226429 grasshopper, hopper
n02229544 cricket
n02231487 walking stick, walkingstick, stick insect
n02233338 cockroach, roach
n02236044 mantis, mantid
n02256656 cicada, cicala
n02259212 leafhopper
n02264363 lacewing, lacewing fly
n02268443 dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk
n02268853 damselfly
n02276258 admiral
n02277742 ringlet, ringlet butterfly
n02279972 monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
n02280649 cabbage butterfly
n02281406 sulphur butterfly, sulfur butterfly
n02281787 lycaenid, lycaenid butterfly
n02317335 starfish, sea star
n02319095 sea urchin
n02321529 sea cucumber, holothurian
n02325366 wood rabbit, cottontail, cottontail rabbit
n02326432 hare
n02328150 Angora, Angora rabbit
n02342885 hamster
n02346627 porcupine, hedgehog
n02356798 fox squirrel, eastern fox squirrel, Sciurus niger
n02361337 marmot
n02363005 beaver
n02364673 guinea pig, Cavia cobaya
n02389026 sorrel
n02391049 zebra
n02395406 hog, pig, grunter, squealer, Sus scrofa
n02396427 wild boar, boar, Sus scrofa
n02397096 warthog
n02398521 hippopotamus, hippo, river horse, Hippopotamus amphibius
n02403003 ox
n02408429 water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
n02410509 bison
n02412080 ram, tup
n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
n02417914 ibex, Capra ibex
n02422106 hartebeest
n02422699 impala, Aepyceros melampus
n02423022 gazelle
n02437312 Arabian camel, dromedary, Camelus dromedarius
n02437616 llama
n02441942 weasel
n02442845 mink
n02443114 polecat, fitch, foulmart, foumart, Mustela putorius
n02443484 black-footed ferret, ferret, Mustela nigripes
n02444819 otter
n02445715 skunk, polecat, wood pussy
n02447366 badger
n02454379 armadillo
n02457408 three-toed sloth, ai, Bradypus tridactylus
n02480495 orangutan, orang, orangutang, Pongo pygmaeus
n02480855 gorilla, Gorilla gorilla
n02481823 chimpanzee, chimp, Pan troglodytes
n02483362 gibbon, Hylobates lar
n02483708 siamang, Hylobates syndactylus, Symphalangus syndactylus
n02484975 guenon, guenon monkey
n02486261 patas, hussar monkey, Erythrocebus patas
n02486410 baboon
n02487347 macaque
n02488291 langur
n02488702 colobus, colobus monkey
n02489166 proboscis monkey, Nasalis larvatus
n02490219 marmoset
n02492035 capuchin, ringtail, Cebus capucinus
n02492660 howler monkey, howler
n02493509 titi, titi monkey
n02493793 spider monkey, Ateles geoffroyi
n02494079 squirrel monkey, Saimiri sciureus
n02497673 Madagascar cat, ring-tailed lemur, Lemur catta
n02500267 indri, indris, Indri indri, Indri brevicaudatus
n02504013 Indian elephant, Elephas maximus
n02504458 African elephant, Loxodonta africana
n02509815 lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
n02510455 giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
n02514041 barracouta, snoek
n02526121 eel
n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch
n02606052 rock beauty, Holocanthus tricolor
n02607072 anemone fish
n02640242 sturgeon
n02641379 gar, garfish, garpike, billfish, Lepisosteus osseus
n02643566 lionfish
n02655020 puffer, pufferfish, blowfish, globefish
n02666196 abacus
n02667093 abaya
n02669723 academic gown, academic robe, judge's robe
n02672831 accordion, piano accordion, squeeze box
n02676566 acoustic guitar
n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier
n02690373 airliner
n02692877 airship, dirigible
n02699494 altar
n02701002 ambulance
n02704792 amphibian, amphibious vehicle
n02708093 analog clock
n02727426 apiary, bee house
n02730930 apron
n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin
n02749479 assault rifle, assault gun
n02769748 backpack, back pack, knapsack, packsack, rucksack, haversack
n02776631 bakery, bakeshop, bakehouse
n02777292 balance beam, beam
n02782093 balloon
n02783161 ballpoint, ballpoint pen, ballpen, Biro
n02786058 Band Aid
n02787622 banjo
n02788148 bannister, banister, balustrade, balusters, handrail
n02790996 barbell
n02791124 barber chair
n02791270 barbershop
n02793495 barn
n02794156 barometer
n02795169 barrel, cask
n02797295 barrow, garden cart, lawn cart, wheelbarrow
n02799071 baseball
n02802426 basketball
n02804414 bassinet
n02804610 bassoon
n02807133 bathing cap, swimming cap
n02808304 bath towel
n02808440 bathtub, bathing tub, bath, tub
n02814533 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
n02814860 beacon, lighthouse, beacon light, pharos
n02815834 beaker
n02817516 bearskin, busby, shako
n02823428 beer bottle
n02823750 beer glass
n02825657 bell cote, bell cot
n02834397 bib
n02835271 bicycle-built-for-two, tandem bicycle, tandem
n02837789 bikini, two-piece
n02840245 binder, ring-binder
n02841315 binoculars, field glasses, opera glasses
n02843684 birdhouse
n02859443 boathouse
n02860847 bobsled, bobsleigh, bob
n02865351 bolo tie, bolo, bola tie, bola
n02869837 bonnet, poke bonnet
n02870880 bookcase
n02871525 bookshop, bookstore, bookstall
n02877765 bottlecap
n02879718 bow
n02883205 bow tie, bow-tie, bowtie
n02892201 brass, memorial tablet, plaque
n02892767 brassiere, bra, bandeau
n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
n02895154 breastplate, aegis, egis
n02906734 broom
n02909870 bucket, pail
n02910353 buckle
n02916936 bulletproof vest
n02917067 bullet train, bullet
n02927161 butcher shop, meat market
n02930766 cab, hack, taxi, taxicab
n02939185 caldron, cauldron
n02948072 candle, taper, wax light
n02950826 cannon
n02951358 canoe
n02951585 can opener, tin opener
n02963159 cardigan
n02965783 car mirror
n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig
n02966687 carpenter's kit, tool kit
n02971356 carton
n02974003 car wheel
n02977058 cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM
n02978881 cassette
n02979186 cassette player
n02980441 castle
n02981792 catamaran
n02988304 CD player
n02992211 cello, violoncello
n02992529 cellular telephone, cellular phone, cellphone, cell, mobile phone
n02999410 chain
n03000134 chainlink fence
n03000247 chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour
n03000684 chain saw, chainsaw
n03014705 chest
n03016953 chiffonier, commode
n03017168 chime, bell, gong
n03018349 china cabinet, china closet
n03026506 Christmas stocking
n03028079 church, church building
n03032252 cinema, movie theater, movie theatre, movie house, picture palace
n03041632 cleaver, meat cleaver, chopper
n03042490 cliff dwelling
n03045698 cloak
n03047690 clog, geta, patten, sabot
n03062245 cocktail shaker
n03063599 coffee mug
n03063689 coffeepot
n03065424 coil, spiral, volute, whorl, helix
n03075370 combination lock
n03085013 computer keyboard, keypad
n03089624 confectionery, confectionary, candy store
n03095699 container ship, containership, container vessel
n03100240 convertible
n03109150 corkscrew, bottle screw
n03110669 cornet, horn, trumpet, trump
n03124043 cowboy boot
n03124170 cowboy hat, ten-gallon hat
n03125729 cradle
n03126707 crane
n03127747 crash helmet
n03127925 crate
n03131574 crib, cot
n03133878 Crock Pot
n03134739 croquet ball
n03141823 crutch
n03146219 cuirass
n03160309 dam, dike, dyke
n03179701 desk
n03180011 desktop computer
n03187595 dial telephone, dial phone
n03188531 diaper, nappy, napkin
n03196217 digital clock
n03197337 digital watch
n03201208 dining table, board
n03207743 dishrag, dishcloth
n03207941 dishwasher, dish washer, dishwashing machine
n03208938 disk brake, disc brake
n03216828 dock, dockage, docking facility
n03218198 dogsled, dog sled, dog sleigh
n03220513 dome
n03223299 doormat, welcome mat
n03240683 drilling platform, offshore rig
n03249569 drum, membranophone, tympan
n03250847 drumstick
n03255030 dumbbell
n03259280 Dutch oven
n03271574 electric fan, blower
n03272010 electric guitar
n03272562 electric locomotive
n03290653 entertainment center
n03291819 envelope
n03297495 espresso maker
n03314780 face powder
n03325584 feather boa, boa
n03337140 file, file cabinet, filing cabinet
n03344393 fireboat
n03345487 fire engine, fire truck
n03347037 fire screen, fireguard
n03355925 flagpole, flagstaff
n03372029 flute, transverse flute
n03376595 folding chair
n03379051 football helmet
n03384352 forklift
n03388043 fountain
n03388183 fountain pen
n03388549 four-poster
n03393912 freight car
n03394916 French horn, horn
n03400231 frying pan, frypan, skillet
n03404251 fur coat
n03417042 garbage truck, dustcart
n03424325 gasmask, respirator, gas helmet
n03425413 gas pump, gasoline pump, petrol pump, island dispenser
n03443371 goblet
n03444034 go-kart
n03445777 golf ball
n03445924 golfcart, golf cart
n03447447 gondola
n03447721 gong, tam-tam
n03450230 gown
n03452741 grand piano, grand
n03457902 greenhouse, nursery, glasshouse
n03459775 grille, radiator grille
n03461385 grocery store, grocery, food market, market
n03467068 guillotine
n03476684 hair slide
n03476991 hair spray
n03478589 half track
n03481172 hammer
n03482405 hamper
n03483316 hand blower, blow dryer, blow drier, hair dryer, hair drier
n03485407 hand-held computer, hand-held microcomputer
n03485794 handkerchief, hankie, hanky, hankey
n03492542 hard disc, hard disk, fixed disk
n03494278 harmonica, mouth organ, harp, mouth harp
n03495258 harp
n03496892 harvester, reaper
n03498962 hatchet
n03527444 holster
n03529860 home theater, home theatre
n03530642 honeycomb
n03532672 hook, claw
n03534580 hoopskirt, crinoline
n03535780 horizontal bar, high bar
n03538406 horse cart, horse-cart
n03544143 hourglass
n03584254 iPod
n03584829 iron, smoothing iron
n03590841 jack-o'-lantern
n03594734 jean, blue jean, denim
n03594945 jeep, landrover
n03595614 jersey, T-shirt, tee shirt
n03598930 jigsaw puzzle
n03599486 jinrikisha, ricksha, rickshaw
n03602883 joystick
n03617480 kimono
n03623198 knee pad
n03627232 knot
n03630383 lab coat, laboratory coat
n03633091 ladle
n03637318 lampshade, lamp shade
n03642806 laptop, laptop computer
n03649909 lawn mower, mower
n03657121 lens cap, lens cover
n03658185 letter opener, paper knife, paperknife
n03661043 library
n03662601 lifeboat
n03666591 lighter, light, igniter, ignitor
n03670208 limousine, limo
n03673027 liner, ocean liner
n03676483 lipstick, lip rouge
n03680355 Loafer
n03690938 lotion
n03691459 loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
n03692522 loupe, jeweler's loupe
n03697007 lumbermill, sawmill
n03706229 magnetic compass
n03709823 mailbag, postbag
n03710193 mailbox, letter box
n03710637 maillot
n03710721 maillot, tank suit
n03717622 manhole cover
n03720891 maraca
n03721384 marimba, xylophone
n03724870 mask
n03729826 matchstick
n03733131 maypole
n03733281 maze, labyrinth
n03733805 measuring cup
n03742115 medicine chest, medicine cabinet
n03743016 megalith, megalithic structure
n03759954 microphone, mike
n03761084 microwave, microwave oven
n03763968 military uniform
n03764736 milk can
n03769881 minibus
n03770439 miniskirt, mini
n03770679 minivan
n03773504 missile
n03775071 mitten
n03775546 mixing bowl
n03776460 mobile home, manufactured home
n03777568 Model T
n03777754 modem
n03781244 monastery
n03782006 monitor
n03785016 moped
n03786901 mortar
n03787032 mortarboard
n03788195 mosque
n03788365 mosquito net
n03791053 motor scooter, scooter
n03792782 mountain bike, all-terrain bike, off-roader
n03792972 mountain tent
n03793489 mouse, computer mouse
n03794056 mousetrap
n03796401 moving van
n03803284 muzzle
n03804744 nail
n03814639 neck brace
n03814906 necklace
n03825788 nipple
n03832673 notebook, notebook computer
n03837869 obelisk
n03838899 oboe, hautboy, hautbois
n03840681 ocarina, sweet potato
n03841143 odometer, hodometer, mileometer, milometer
n03843555 oil filter
n03854065 organ, pipe organ
n03857828 oscilloscope, scope, cathode-ray oscilloscope, CRO
n03866082 overskirt
n03868242 oxcart
n03868863 oxygen mask
n03871628 packet
n03873416 paddle, boat paddle
n03874293 paddlewheel, paddle wheel
n03874599 padlock
n03876231 paintbrush
n03877472 pajama, pyjama, pj's, jammies
n03877845 palace
n03884397 panpipe, pandean pipe, syrinx
n03887697 paper towel
n03888257 parachute, chute
n03888605 parallel bars, bars
n03891251 park bench
n03891332 parking meter
n03895866 passenger car, coach, carriage
n03899768 patio, terrace
n03902125 pay-phone, pay-station
n03903868 pedestal, plinth, footstall
n03908618 pencil box, pencil case
n03908714 pencil sharpener
n03916031 perfume, essence
n03920288 Petri dish
n03924679 photocopier
n03929660 pick, plectrum, plectron
n03929855 pickelhaube
n03930313 picket fence, paling
n03930630 pickup, pickup truck
n03933933 pier
n03935335 piggy bank, penny bank
n03937543 pill bottle
n03938244 pillow
n03942813 ping-pong ball
n03944341 pinwheel
n03947888 pirate, pirate ship
n03950228 pitcher, ewer
n03954731 plane, carpenter's plane, woodworking plane
n03956157 planetarium
n03958227 plastic bag
n03961711 plate rack
n03967562 plow, plough
n03970156 plunger, plumber's helper
n03976467 Polaroid camera, Polaroid Land camera
n03976657 pole
n03977966 police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria
n03980874 poncho
n03982430 pool table, billiard table, snooker table
n03983396 pop bottle, soda bottle
n03991062 pot, flowerpot
n03992509 potter's wheel
n03995372 power drill
n03998194 prayer rug, prayer mat
n04004767 printer
n04005630 prison, prison house
n04008634 projectile, missile
n04009552 projector
n04019541 puck, hockey puck
n04023962 punching bag, punch bag, punching ball, punchball
n04026417 purse
n04033901 quill, quill pen
n04033995 quilt, comforter, comfort, puff
n04037443 racer, race car, racing car
n04039381 racket, racquet
n04040759 radiator
n04041544 radio, wireless
n04044716 radio telescope, radio reflector
n04049303 rain barrel
n04065272 recreational vehicle, RV, R.V.
n04067472 reel
n04069434 reflex camera
n04070727 refrigerator, icebox
n04074963 remote control, remote
n04081281 restaurant, eating house, eating place, eatery
n04086273 revolver, six-gun, six-shooter
n04090263 rifle
n04099969 rocking chair, rocker
n04111531 rotisserie
n04116512 rubber eraser, rubber, pencil eraser
n04118538 rugby ball
n04118776 rule, ruler
n04120489 running shoe
n04125021 safe
n04127249 safety pin
n04131690 saltshaker, salt shaker
n04133789 sandal
n04136333 sarong
n04141076 sax, saxophone
n04141327 scabbard
n04141975 scale, weighing machine
n04146614 school bus
n04147183 schooner
n04149813 scoreboard
n04152593 screen, CRT screen
n04153751 screw
n04154565 screwdriver
n04162706 seat belt, seatbelt
n04179913 sewing machine
n04192698 shield, buckler
n04200800 shoe shop, shoe-shop, shoe store
n04201297 shoji
n04204238 shopping basket
n04204347 shopping cart
n04208210 shovel
n04209133 shower cap
n04209239 shower curtain
n04228054 ski
n04229816 ski mask
n04235860 sleeping bag
n04238763 slide rule, slipstick
n04239074 sliding door
n04243546 slot, one-armed bandit
n04251144 snorkel
n04252077 snowmobile
n04252225 snowplow, snowplough
n04254120 soap dispenser
n04254680 soccer ball
n04254777 sock
n04258138 solar dish, solar collector, solar furnace
n04259630 sombrero
n04263257 soup bowl
n04264628 space bar
n04265275 space heater
n04266014 space shuttle
n04270147 spatula
n04273569 speedboat
n04275548 spider web, spider's web
n04277352 spindle
n04285008 sports car, sport car
n04286575 spotlight, spot
n04296562 stage
n04310018 steam locomotive
n04311004 steel arch bridge
n04311174 steel drum
n04317175 stethoscope
n04325704 stole
n04326547 stone wall
n04328186 stopwatch, stop watch
n04330267 stove
n04332243 strainer
n04335435 streetcar, tram, tramcar, trolley, trolley car
n04336792 stretcher
n04344873 studio couch, day bed
n04346328 stupa, tope
n04347754 submarine, pigboat, sub, U-boat
n04350905 suit, suit of clothes
n04355338 sundial
n04355933 sunglass
n04356056 sunglasses, dark glasses, shades
n04357314 sunscreen, sunblock, sun blocker
n04366367 suspension bridge
n04367480 swab, swob, mop
n04370456 sweatshirt
n04371430 swimming trunks, bathing trunks
n04371774 swing
n04372370 switch, electric switch, electrical switch
n04376876 syringe
n04380533 table lamp
n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle
n04392985 tape player
n04398044 teapot
n04399382 teddy, teddy bear
n04404412 television, television system
n04409515 tennis ball
n04417672 thatch, thatched roof
n04418357 theater curtain, theatre curtain
n04423845 thimble
n04428191 thresher, thrasher, threshing machine
n04429376 throne
n04435653 tile roof
n04442312 toaster
n04443257 tobacco shop, tobacconist shop, tobacconist
n04447861 toilet seat
n04456115 torch
n04458633 totem pole
n04461696 tow truck, tow car, wrecker
n04462240 toyshop
n04465501 tractor
n04467665 trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi
n04476259 tray
n04479046 trench coat
n04482393 tricycle, trike, velocipede
n04483307 trimaran
n04485082 tripod
n04486054 triumphal arch
n04487081 trolleybus, trolley coach, trackless trolley
n04487394 trombone
n04493381 tub, vat
n04501370 turnstile
n04505470 typewriter keyboard
n04507155 umbrella
n04509417 unicycle, monocycle
n04515003 upright, upright piano
n04517823 vacuum, vacuum cleaner
n04522168 vase
n04523525 vault
n04525038 velvet
n04525305 vending machine
n04532106 vestment
n04532670 viaduct
n04536866 violin, fiddle
n04540053 volleyball
n04542943 waffle iron
n04548280 wall clock
n04548362 wallet, billfold, notecase, pocketbook
n04550184 wardrobe, closet, press
n04552348 warplane, military plane
n04553703 washbasin, handbasin, washbowl, lavabo, wash-hand basin
n04554684 washer, automatic washer, washing machine
n04557648 water bottle
n04560804 water jug
n04562935 water tower
n04579145 whiskey jug
n04579432 whistle
n04584207 wig
n04589890 window screen
n04590129 window shade
n04591157 Windsor tie
n04591713 wine bottle
n04592741 wing
n04596742 wok
n04597913 wooden spoon
n04599235 wool, woolen, woollen
n04604644 worm fence, snake fence, snake-rail fence, Virginia fence
n04606251 wreck
n04612504 yawl
n04613696 yurt
n06359193 web site, website, internet site, site
n06596364 comic book
n06785654 crossword puzzle, crossword
n06794110 street sign
n06874185 traffic light, traffic signal, stoplight
n07248320 book jacket, dust cover, dust jacket, dust wrapper
n07565083 menu
n07579787 plate
n07583066 guacamole
n07584110 consomme
n07590611 hot pot, hotpot
n07613480 trifle
n07614500 ice cream, icecream
n07615774 ice lolly, lolly, lollipop, popsicle
n07684084 French loaf
n07693725 bagel, beigel
n07695742 pretzel
n07697313 cheeseburger
n07697537 hotdog, hot dog, red hot
n07711569 mashed potato
n07714571 head cabbage
n07714990 broccoli
n07715103 cauliflower
n07716358 zucchini, courgette
n07716906 spaghetti squash
n07717410 acorn squash
n07717556 butternut squash
n07718472 cucumber, cuke
n07718747 artichoke, globe artichoke
n07720875 bell pepper
n07730033 cardoon
n07734744 mushroom
n07742313 Granny Smith
n07745940 strawberry
n07747607 orange
n07749582 lemon
n07753113 fig
n07753275 pineapple, ananas
n07753592 banana
n07754684 jackfruit, jak, jack
n07760859 custard apple
n07768694 pomegranate
n07802026 hay
n07831146 carbonara
n07836838 chocolate sauce, chocolate syrup
n07860988 dough
n07871810 meat loaf, meatloaf
n07873807 pizza, pizza pie
n07875152 potpie
n07880968 burrito
n07892512 red wine
n07920052 espresso
n07930864 cup
n07932039 eggnog
n09193705 alp
n09229709 bubble
n09246464 cliff, drop, drop-off
n09256479 coral reef
n09288635 geyser
n09332890 lakeside, lakeshore
n09399592 promontory, headland, head, foreland
n09421951 sandbar, sand bar
n09428293 seashore, coast, seacoast, sea-coast
n09468604 valley, vale
n09472597 volcano
n09835506 ballplayer, baseball player
n10148035 groom, bridegroom
n10565667 scuba diver
n11879895 rapeseed
n11939491 daisy
n12057211 yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum
n12144580 corn
n12267677 acorn
n12620546 hip, rose hip, rosehip
n12768682 buckeye, horse chestnut, conker
n12985857 coral fungus
n12998815 agaric
n13037406 gyromitra
n13040303 stinkhorn, carrion fungus
n13044778 earthstar
n13052670 hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa
n13054560 bolete
n13133613 ear, spike, capitulum
n15075141 toilet tissue, toilet paper, bathroom tissue
``` |
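A minimal sketch for turning the mapping above into a Python dict, assuming it has been saved to a local file (the filename `mapping.txt` is hypothetical):
```python
def load_mapping(path="mapping.txt"):  # hypothetical filename
    mapping = {}
    with open(path) as f:
        for line in f:
            synset, _, names = line.strip().partition(" ")
            if synset:
                # e.g. "n01440764" -> ["tench", "Tinca tinca"]
                mapping[synset] = [n.strip() for n in names.split(",")]
    return mapping
```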
jamescalam/reddit-topics | 2022-04-28T18:14:19.000Z | [
"region:us"
] | jamescalam | null | null | null | 2 | 17 | Entry not found |
valurank/News_Articles_Categorization | 2023-08-27T05:49:31.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | null | 0 | 17 | ---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for News_Articles_Categorization
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
3,722 news articles classified into eight categories: World, Politics, Tech, Entertainment, Sport, Business, Health, and Science.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns, Text and Category.
The Text column contains the news article text, and the Category column contains the class each article belongs to.
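As an illustration of the multi-class setup, the Category column can be encoded into integer class ids. A minimal sketch, assuming the dataset is loadable from the Hub under this repo id with a `train` split:
```python
from datasets import load_dataset

# assumptions: the Hub id matches this repo and a "train" split exists
ds = load_dataset("valurank/News_Articles_Categorization", split="train")
ds = ds.class_encode_column("Category")  # maps the eight class names to integer ids
print(ds.features["Category"].names)
```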
## Source Data
The dataset was scraped from different news platforms.
|
eugenetanjc/speech_accent_1000 | 2022-06-23T13:58:26.000Z | [
"region:us"
] | eugenetanjc | null | null | null | 0 | 17 | Entry not found |
Paul/hatecheck-french | 2022-07-05T10:40:23.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:fr",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | null | 0 | 17 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- fr
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: French HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
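A typical diagnostic workflow is to slice the suite by functionality and score a model on each slice separately. A minimal sketch, assuming the dataset loads from the Hub under this repo id with a `test` split (the functionality name reuses the example above):
```python
from datasets import load_dataset

# assumptions: the Hub id matches this repo and a "test" split exists
ds = load_dataset("Paul/hatecheck-french", split="test")
subset = ds.filter(lambda x: x["functionality"] == "target_obj_nh")
print(len(subset), subset[0]["test_case"], subset[0]["label_gold"])
```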
pinecone/image-set | 2022-07-07T15:33:29.000Z | [
"license:cc-by-4.0",
"region:us"
] | pinecone | null | null | null | 1 | 17 | ---
license: cc-by-4.0
---
|
embedding-data/coco_captions_quintets | 2022-08-02T02:18:54.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"arxiv:1405.0312",
"region:us"
] | embedding-data | null | null | null | 3 | 17 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/coco_captions
pretty_name: coco_captions
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "coco_captions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://cocodataset.org/#home](https://cocodataset.org/#home)
- **Repository:** [https://github.com/cocodataset/cocodataset.github.io](https://github.com/cocodataset/cocodataset.github.io)
- **Paper:** [More Information Needed](https://arxiv.org/abs/1405.0312)
- **Point of Contact:** [info@cocodataset.org](info@cocodataset.org)
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 6.32 MB
### Dataset Summary
COCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image; useful for sentence similarity tasks.
Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with the key "set" mapping to the list of sentences:
```
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
...
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/coco_captions")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 82783
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
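As a sketch of the sentence-similarity use case, each quintet can be expanded into positive pairs for contrastive training. This assumes the `sentence-transformers` package, which is not part of this card's official instructions:
```python
from itertools import combinations
from sentence_transformers import InputExample

def quintet_to_pairs(quintet):
    # every pair of captions describing the same image forms a positive pair
    return [InputExample(texts=[a, b]) for a, b in combinations(quintet, 2)]

pairs = quintet_to_pairs(dataset["train"][0]["set"])  # yields 10 pairs per quintet
```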
### Data Instances
[More Information Needed](https://cocodataset.org/#format-data)
### Data Splits
[More Information Needed](https://cocodataset.org/#format-data)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://cocodataset.org/#home)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://cocodataset.org/#home)
#### Who are the source language producers?
[More Information Needed](https://cocodataset.org/#home)
### Annotations
#### Annotation process
[More Information Needed](https://cocodataset.org/#home)
#### Who are the annotators?
[More Information Needed](https://cocodataset.org/#home)
### Personal and Sensitive Information
[More Information Needed](https://cocodataset.org/#home)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://cocodataset.org/#home)
### Discussion of Biases
[More Information Needed](https://cocodataset.org/#home)
### Other Known Limitations
[More Information Needed](https://cocodataset.org/#home)
## Additional Information
### Dataset Curators
[More Information Needed](https://cocodataset.org/#home)
### Licensing Information
The annotations in this dataset along with this website belong to the COCO Consortium
and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
[More Information Needed](https://cocodataset.org/#home)
### Contributions
Thanks to:
- Tsung-Yi Lin - Google Brain
- Genevieve Patterson - MSR, Trash TV
- Matteo R. - Ronchi Caltech
- Yin Cui - Google
- Michael Maire - TTI-Chicago
- Serge Belongie - Cornell Tech
- Lubomir Bourdev - WaveOne, Inc.
- Ross Girshick - FAIR
- James Hays - Georgia Tech
- Pietro Perona - Caltech
- Deva Ramanan - CMU
- Larry Zitnick - FAIR
- Piotr Dollár - FAIR
for adding this dataset.
|
Siyong/speech_timit | 2022-07-13T00:19:49.000Z | [
"region:us"
] | Siyong | null | null | null | 0 | 17 | Entry not found |
succinctly/midjourney-prompts | 2022-07-22T01:49:16.000Z | [
"license:apache-2.0",
"region:us"
] | succinctly | null | null | null | 77 | 17 | ---
license: apache-2.0
---
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney): users issue a query in natural language, and the Midjourney bot returns AI-generated images that follow the given description. The raw dataset (with Discord messages) can be found on Kaggle: [Midjourney User Prompts & Generated Images (250k)](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage). The authors of the scraped dataset have no affiliation with Midjourney.
This HuggingFace dataset was [processed](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) from the raw Discord messages to solely include the text prompts issued by the user (thus excluding the generated images and any other metadata). It could be used, for instance, to fine-tune a large language model to produce or auto-complete creative prompts for image generation.
Check out [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator), a GPT-2 model fine-tuned on this dataset. |
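For the fine-tuning use case mentioned above, a minimal tokenization sketch; the prompt column name (`text`) is an assumption, not confirmed by this card:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("succinctly/midjourney-prompts")
tok = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # assumes the prompts live in a column named "text"
    return tok(batch["text"], truncation=True, max_length=128)

ds = ds.map(tokenize, batched=True)
```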
graphs-datasets/IMDB-BINARY | 2023-02-07T16:39:00.000Z | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | graphs-datasets | null | null | null | 1 | 17 | ---
license: unknown
task_categories:
- graph-ml
---
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip)**
- **Paper:** Deep Graph Kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actresses, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
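To make the evaluation protocol concrete, here is a minimal sketch of a stratified 10-fold split; the placeholder labels stand in for the real graph labels (`graph["y"][0]`), and scikit-learn is an assumption of this sketch:
```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.array([0] * 500 + [1] * 500)  # placeholder for the 1000 graph labels
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    # train a graph classifier on train_idx, report accuracy on test_idx
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test graphs")
```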
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/IMDB-BINARY")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): contains the graph label to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
This information can be found back using
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/", name="IMDB-BINARY")
```
## Additional Information
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
opentargets/clinical_trial_reason_to_stop | 2022-12-12T08:57:19.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
... | opentargets | null | null | null | 6 | 17 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: clinical_trial_reason_to_stop
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bio
- research papers
- clinical trial
- drug development
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for Clinical Trials's Reason to Stop
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.opentargets.org
- **Repository:** https://github.com/LesyaR/stopReasons
- **Paper:**
- **Point of Contact:** data@opentargets.org
### Dataset Summary
This dataset contains a curated classification of more than 5,000 reasons why clinical trials were stopped early.
The text has been extracted from clinicaltrials.gov, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.
All 17 possible classes have been carefully defined:
- Business_Administrative
- Another_Study
- Negative
- Study_Design
- Invalid_Reason
- Ethical_Reason
- Insufficient_Data
- Insufficient_Enrollment
- Study_Staff_Moved
- Endpoint_Met
- Regulatory
- Logistics_Resources
- Safety_Sideeffects
- No_Context
- Success
- Interim_Analysis
- Covid19
### Supported Tasks and Leaderboards
Multi-class classification
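A minimal usage sketch (the split name `train` is an assumption, since the data splits are not documented below):
```python
from collections import Counter
from datasets import load_dataset

# Load the dataset and inspect the class distribution of the stop reasons.
ds = load_dataset("opentargets/clinical_trial_reason_to_stop", split="train")
print(ds[0])  # e.g. {'text': '...', 'label': 'Another_Study'}

label_counts = Counter(ds["label"])
for label, count in label_counts.most_common():
    print(f"{label}: {count}")
```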
### Languages
English
## Dataset Structure
### Data Instances
```json
{'text': 'Due to company decision to focus resources on a larger, controlled study in this patient population."',
'label': 'Another_Study'}
```
### Data Fields
`text`: contains the reason for the clinical trial's early stop
`label`: contains one of the 17 defined classes
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset has an Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ireneisdoomed](https://github.com/ireneisdoomed) for adding this dataset. |
ArneBinder/xfund | 2022-09-21T15:12:34.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | ArneBinder | null | null | null | 1 | 17 | ---
license: cc-by-nc-sa-4.0
---
|
copenlu/spiced | 2022-10-24T12:31:04.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|s2orc... | copenlu | null | null | null | 2 | 17 | ---
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SPICED
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
tags:
- scientific text
- scholarly text
- semantic text similarity
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
# Dataset Card for SPICED
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**
### Dataset Summary
The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings from scientific papers, news media, and Twitter. The types of pairs are between <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity in the _findings_ described by each sentence, on a scale from 1-5. This is called the _Information Matching Score (IMS)_. The data was curated from S2ORC and matched news articles and Tweets using Altmetric. Instances are annotated by experts using the Prolific platform and Potato. Please use the following citation when using this dataset:
```
@inproceedings{modeling-information-change,
title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
booktitle = {Proceedings of EMNLP},
publisher = {Association for Computational Linguistics},
year = {2022}
}
```
### Supported Tasks and Leaderboards
The task is to predict the IMS between two scientific sentences, which is a scalar between 1 and 5. Preferred metrics are mean-squared error and Pearson correlation.
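A minimal sketch of the preferred evaluation, assuming numpy and scipy are available:
```python
import numpy as np
from scipy.stats import pearsonr

def evaluate_ims(predictions, gold_scores):
    """Score IMS predictions with mean-squared error and Pearson correlation."""
    predictions = np.asarray(predictions, dtype=float)
    gold_scores = np.asarray(gold_scores, dtype=float)
    mse = float(np.mean((predictions - gold_scores) ** 2))
    pearson_r, _ = pearsonr(predictions, gold_scores)
    return {"mse": mse, "pearson": float(pearson_r)}

print(evaluate_ims([1.0, 3.5, 4.8], [1.2, 3.0, 5.0]))
```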
### Languages
English
## Dataset Structure
### Data Fields
- DOI: The DOI of the original scientific article
- instance\_id: Unique instance ID for the sample. The ID contains the field, whether or not it is a tweet, and whether or not the sample was manually labeled or automatically using SBERT (marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the surrounding two sentences for the news finding. For tweets, a copy of the tweet
- Paper Context: The surrounding two sentences for the paper finding
- scores: Annotator scores after removing low competence annotators
- field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final\_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: A URL to the source article if a news instance or the tweet ID of a tweet
### Data Splits
- train: 4721 instances
- validation: 664 instances
- test: 640 instances
## Dataset Creation
For the full details of how the dataset was created, please refer to our EMNLP 2022 paper.
### Curation Rationale
Science communication is a complex process of translation from highly technical scientific language to common language that lay people can understand. At the same time, the general public relies on good science communication in order to inform critical decisions about their health and behavior. SPICED was curated in order to provide a training dataset and benchmark for machine learning models to measure changes in scientific information at different stages of the science communication pipeline.
### Source Data
#### Initial Data Collection and Normalization
Scientific text: S2ORC
News articles and Tweets are collected through Altmetric.
#### Who are the source language producers?
Scientists, journalists, and Twitter users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Models trained on SPICED can be used to perform large scale analyses of science communication. They can be used to match the same finding discussed in different media, and reveal trends in differences in reporting at different stages of the science communication pipeline. It is hoped that this can help to build tools which will improve science communication.
### Discussion of Biases
The dataset is restricted to computer science, medicine, biology, and psychology, which may introduce some bias in the topics which models will perform well on.
### Other Known Limitations
While some context is available, we do not release the full text of news articles and scientific papers, which may contain further context to help with learning the task. We do however provide the paper DOIs and links to the original news articles in case full text is desired.
## Additional Information
### Dataset Curators
Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
### Licensing Information
MIT
### Contributions
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. |
arbml/MediaSpeech_ar | 2022-11-03T02:09:50.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 17 | Entry not found |
shunk031/jsnli | 2022-12-12T07:36:58.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"multilinguality:monolingual",
"language:ja",
"license:cc-by-sa-4.0",
"natural-language-inference",
"nli",
"jsnli",
"region:us"
] | shunk031 | == Japanese SNLI (JSNLI) Dataset ==
A natural language inference dataset created by translating the SNLI corpus into Japanese
The training data was built by translating the original data and then filtering it automatically by machine
The evaluation data was filtered via two-stage crowdsourcing, checking whether each example reads as natural Japanese and whether the post-translation label matches the original label | - 吉越 卓見, 河原 大輔, 黒橋 禎夫: 機械翻訳を用いた自然言語推論データセットの多言語化, 第244回自然言語処理研究会, (2020.7.3).
- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
- Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions." Transactions of the Association for Computational Linguistics 2 (2014): 67-78. | null | 3 | 17 | ---
language:
- ja
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
tags:
- natural-language-inference
- nli
- jsnli
datasets:
- without-filtering
- with-filtering
metrics:
- accuracy
---
# Dataset Card for JSNLI
[](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88
- Repository: https://github.com/shunk031/huggingface-datasets_jsnli
### Dataset Summary
From the [Japanese SNLI (JSNLI) dataset page - KUROHASHI-CHU-MURAWAKI LAB](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88 ):
> This dataset is a Japanese translation of [SNLI](https://nlp.stanford.edu/projects/snli/), a standard benchmark for natural language inference (NLI).
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
Japanese is the primary language of all annotations.
## Dataset Structure
> The dataset is in TSV format; each line represents a (label, premise, hypothesis) triple. Premises and hypotheses are morphologically segmented with JUMAN++. An example:
```
entailment 自転車 で 2 人 の 男性 が レース で 競い ます 。 人々 は 自転車 に 乗って います 。
```
### Data Instances
```python
from datasets import load_dataset
load_dataset("shunk031/jsnli", "without-filtering")
```
```json
{
'label': 'neutral',
'premise': 'ガレージ で 、 壁 に ナイフ を 投げる 男 。',
'hypothesis': '男 は 魔法 の ショー の ため に ナイフ を 投げる 行為 を 練習 して い ます 。'
}
```
### Data Fields
### Data Splits
| name | train | validation |
|-------------------|--------:|-----------:|
| without-filtering | 548,014 | 3,916 |
| with-filtering | 533,005 | 3,916 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
> The dataset was built by applying machine translation to SNLI and then filtering: the evaluation data was filtered precisely via crowdsourcing, and the training data was filtered automatically by machine.
> Two versions of the training data are released: one with no filtering at all, and the filtered version that achieved the highest accuracy. The unfiltered training data contains 548,014 pairs, the filtered training data 533,005 pairs, and the evaluation data 3,916 pairs. See the references for details.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
> Questions about this dataset should be sent to nl-resource (at) nlp.ist.i.kyoto-u.ac.jp.
### Dataset Curators
### Licensing Information
> This dataset is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/), the same license as SNLI. See the references regarding SNLI.
### Citation Information
```bibtex
@article{吉越卓見 2020 機械翻訳を用いた自然言語推論データセットの多言語化,
title={機械翻訳を用いた自然言語推論データセットの多言語化},
author={吉越卓見 and 河原大輔 and 黒橋禎夫 and others},
journal={研究報告自然言語処理 (NL)},
volume={2020},
number={6},
pages={1--8},
year={2020}
}
```
```bibtex
@inproceedings{bowman2015large,
title={A large annotated corpus for learning natural language inference},
author={Bowman, Samuel and Angeli, Gabor and Potts, Christopher and Manning, Christopher D},
booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
pages={632--642},
year={2015}
}
```
```bibtex
@article{young2014image,
title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions},
author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia},
journal={Transactions of the Association for Computational Linguistics},
volume={2},
pages={67--78},
year={2014},
publisher={MIT Press}
}
```
### Contributions
Heartfelt thanks to Takumi Yoshikoshi, Daisuke Kawahara, and Sadao Kurohashi for releasing the JSNLI dataset.
|
lewtun/corgi | 2022-12-19T08:45:20.000Z | [
"region:us"
] | lewtun | null | null | null | 2 | 17 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 5590698.0
num_examples: 5
download_size: 5591635
dataset_size: 5590698.0
---
# Dataset Card for "corgi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DavidVivancos/MindBigData2022_MNIST_MW | 2023-01-04T08:26:21.000Z | [
"license:odbl",
"region:us"
] | DavidVivancos | null | null | null | 0 | 17 | ---
license: odbl
---
|
Cohere/wikipedia-22-12-ko-embeddings | 2023-03-22T16:55:35.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 2 | 17 | ---
language:
- ko
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>")  # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
Cohere/wikipedia-22-12-ar-embeddings | 2023-03-22T16:52:28.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 2 | 17 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>")  # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
keremberke/german-traffic-sign-detection | 2023-01-16T21:06:06.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Self Driving",
"Transportation",
"region:us"
] | keremberke | null | @misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-16 },
} | null | 2 | 17 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Self Driving
- Transportation
---
<div align="center">
<img width="640" alt="keremberke/german-traffic-sign-detection" src="https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['animals', 'construction', 'cycles crossing', 'danger', 'no entry', 'pedestrian crossing', 'school crossing', 'snow', 'stop', 'bend', 'bend left', 'bend right', 'give way', 'go left', 'go left or straight', 'go right', 'go right or straight', 'go straight', 'keep left', 'keep right', 'no overtaking', 'no overtaking -trucks-', 'no traffic both ways', 'no trucks', 'priority at next intersection', 'priority road', 'restriction ends', 'restriction ends -overtaking -trucks--', 'restriction ends -overtaking-', 'restriction ends 80', 'road narrows', 'roundabout', 'slippery road', 'speed limit 100', 'speed limit 120', 'speed limit 20', 'speed limit 30', 'speed limit 50', 'speed limit 60', 'speed limit 70', 'speed limit 80', 'traffic signal', 'uneven road']
```
### Number of Images
```json
{'test': 54, 'valid': 108, 'train': 383}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1](https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:04 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 545 images.
Signs are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
tomekkorbak/pile-detoxify | 2023-02-07T15:31:11.000Z | [
"task_categories:text-classification",
"task_categories:other",
"task_ids:acceptability-classification",
"task_ids:hate-speech-detection",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"sourc... | tomekkorbak | null | null | null | 0 | 17 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: pile-detoxify
size_categories:
- 1M<n<10M
source_datasets:
- extended|the_pile
tags:
- toxicity
- pretraining-with-human-feedback
task_categories:
- text-classification
- other
task_ids:
- acceptability-classification
- hate-speech-detection
- text-scoring
---
# Dataset Card for pile-detoxify
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the toxicity of each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
The dataset contains 1,949,977 documents.
### Data Fields
- texts (sequence): a list of the sentences in the document, segmented using SpaCy
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
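As an illustration of how these fields fit together, here is a hedged sketch that streams the dataset and keeps only documents below a toxicity threshold; the threshold value is an arbitrary choice, not one prescribed by this card:
```python
from datasets import load_dataset

ds = load_dataset("tomekkorbak/pile-detoxify", split="train", streaming=True)

THRESHOLD = 0.01  # illustrative cutoff, not prescribed by the dataset
for doc in ds:
    if doc["avg_score"] < THRESHOLD:
        clean_text = " ".join(doc["texts"])  # re-join the SpaCy-segmented sentences
        print(clean_text[:200])
        break
```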
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), which is a toxic comment classifier.
We used the `unbiased` model which is based on the 124M parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
#### Who are the annotators?
[Detoxify](https://github.com/unitaryai/detoxify)
### Personal and Sensitive Information
This dataset contains all personally identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile).
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of toxic text and personal identifiable information.
(A version of this dataset with personally identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).)
Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) |
datablations/c4-filter | 2023-02-01T10:29:51.000Z | [
"region:us"
] | datablations | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: domain
dtype: 'null'
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 959334093604
num_examples: 364868892
download_size: 586254318285
dataset_size: 959334093604
---
# Dataset Card for "c4-filter"
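Given the roughly 586 GB download size listed above, streaming is likely the practical way to inspect this dataset; a minimal sketch using the features from the metadata:
```python
from datasets import load_dataset

ds = load_dataset("datablations/c4-filter", split="train", streaming=True)
doc = next(iter(ds))
print(doc["url"], doc["perplexity_score"], doc["dup_ratio"])
```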
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ml4pubmed/pubmed-classification-20k | 2023-02-17T06:31:13.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"pubmed",
"region:us"
] | ml4pubmed | null | null | null | 0 | 17 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- pubmed
size_categories:
- 10K<n<100K
---
# ml4pubmed/pubmed-classification-20k
- A 20k-example subset of PubMed text classification data, prepared for a course. |
maximedb/sick_nl | 2023-04-25T10:19:43.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:nl",
"license:mit",
"region:us"
] | maximedb | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: pair_ID
dtype: int64
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: entailment_label
dtype: string
- name: relatedness_score
dtype: float64
- name: entailment_AB
dtype: string
- name: entailment_BA
dtype: string
- name: sentence_A_original
dtype: string
- name: sentence_B_original
dtype: string
- name: sentence_A_dataset
dtype: string
- name: sentence_B_dataset
dtype: string
- name: SemEval_set
dtype: string
- name: label
dtype: int64
- name: label_seq2seq
dtype: string
splits:
- name: train
num_bytes: 1359887
num_examples: 4439
- name: validation
num_bytes: 153417
num_examples: 495
- name: test
num_bytes: 1496660
num_examples: 4906
download_size: 822658
dataset_size: 3009964
license: mit
task_categories:
- text-classification
language:
- nl
pretty_name: SICK-NL
size_categories:
- 1K<n<10K
---
## Dataset Description
- **Homepage:** https://github.com/gijswijnholds/sick_nl
- **Repository:** https://github.com/gijswijnholds/sick_nl
- **Paper:** https://aclanthology.org/2021.eacl-main.126/
- **Point of Contact:** [Gijs Wijnholds](mailto:gijswijnholds@gmail.com)
### Dataset Summary
An automatically translated, manually corrected translation of the SICK dataset of [Marelli et al. 2014](https://www.aclweb.org/anthology/L14-1314), intended to boost research in Dutch NLP.
### Languages
The dataset is in Dutch.
## Dataset Structure
### Data Fields
- pair_ID: sentence pair ID
- sentence_A: sentence A
- sentence_B: sentence B
- label: textual entailment gold label: entailment (0), neutral (1) or contradiction (2)
- relatedness_score: semantic relatedness gold score (on a 1-5 continuous scale)
- entailment_AB: entailment for the A-B order (A_neutral_B, A_entails_B, or A_contradicts_B)
- entailment_BA: entailment for the B-A order (B_neutral_A, B_entails_A, or B_contradicts_A)
- sentence_A_original: original sentence from which sentence A is derived
- sentence_B_original: original sentence from which sentence B is derived
- sentence_A_dataset: dataset from which the original sentence A was extracted (FLICKR vs. SEMEVAL)
- sentence_B_dataset: dataset from which the original sentence B was extracted (FLICKR vs. SEMEVAL)
### Data Splits
| Train | Trial | Test |
|------:|------:|-----:|
| 4,439 | 495 | 4,906 |
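A minimal loading sketch using the fields described above:
```python
from datasets import load_dataset

ds = load_dataset("maximedb/sick_nl")
example = ds["train"][0]
print(example["sentence_A"], "|", example["sentence_B"])
print(example["entailment_label"], example["relatedness_score"])
```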
## Dataset Creation
The dataset was created by first automatically translating all sentences, then by manually correcting any translation errors. This guarantees naturality of the examples while aligning the relatedness scores and entailment labels. Since the data IDs are preserved the dataset is fully aligned on the sentence level.
## Additional Information
### Licensing Information
This dataset falls under an MIT License.
### Citation Information
```
@inproceedings{wijnholds-etal-2021-sicknl,
title = "SICK-NL: A Dataset for Dutch Natural Language Inference",
author = "Wijnholds, Gijs and Moortgat, Michael",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-main.126/",
}
```
### Contributions
Thanks to [@maximedb](https://huggingface.co/maximedb) for adding this dataset. |
ronig/pdb_sequences | 2023-06-24T18:33:17.000Z | [
"license:pddl",
"region:us"
] | ronig | null | null | null | 0 | 17 | ---
license: pddl
---
# PDB Sequences
This dataset contains 780,163 protein sequences from the [RCSB Protein Data Bank](https://www.rcsb.org/) |
HuggingFaceH4/instruction-dataset | 2023-02-28T22:30:11.000Z | [
"license:apache-2.0",
"region:us"
] | HuggingFaceH4 | null | null | null | 14 | 17 | ---
license: apache-2.0
---
This is the blind eval dataset of high-quality, diverse, human-written instructions with demonstrations. We will be using this for step 3 evaluations in our RLHF pipeline. |
gabeorlanski/tp3 | 2023-07-18T16:22:25.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|p3",
"language:en",
"license:apache-2.0",
"code",
"arxiv:2302.01973",
"arxiv:2106.05784",
"region:us"
] | gabeorlanski | Translating Python Programming Puzzles (TP3) is a code translation benchmark created from the verification functions from the questions in the original Python Programming Puzzles dataset (Schuster et al., 2021) to create this dataset. These functions are hand-crafted by the authors and are used to check if an answer satisfies the constraints of the puzzle. These puzzles range in difficulty from basic character checking to competitive programming problems. Thus, each verification function is written by an expert python programmer and requires a significant understanding of programming to translate. In total, there are 370 python functions to translate. | @article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishabh and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@inproceedings{
schuster2021programming,
title={Programming Puzzles},
author={Tal Schuster and Ashwin Kalyan and Alex Polozov and Adam Tauman Kalai},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2021},
url={https://arxiv.org/abs/2106.05784}
} | null | 0 | 17 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- translation
language:
- en
tags:
- code
pretty_name: BabelCode TP3
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|p3
---
# Dataset Card for Translating Python Programming Puzzles (TP3)
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The Translating Python Programming Puzzles (TP3) dataset was created from the verification functions in the [Python Programming Puzzles dataset (Schuster et al., 2021)](https://github.com/microsoft/PythonProgrammingPuzzles). These functions are hand-crafted by the
authors and are used to check if an answer satisfies the constraints of the puzzle. These puzzles range in difficulty from basic character checking to competitive programming problems.
### Supported Tasks and Leaderboards
### Languages
BC-TP3 supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/tp3")
DatasetDict({
test: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'source', 'question_info'],
num_rows: 5920
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `text`: The description of the problem.
- `signature`: The signature for the problem.
- `signature_with_docstring`: The signature with the adequately formatted docstring for the given problem.
- `arguments`: The arguments of the problem.
- `source`: The source solution in Python.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads`
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function name to use as the entry point.
- `entry_cls_name`: The class name to use as the entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
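A hedged sketch of the placeholder-replacement steps described above; the helper name is illustrative, not part of the BabelCode API:
```python
def build_test_file(example, prediction_code: str) -> str:
    """Assemble an executable test script for one problem and one prediction."""
    info = example["question_info"]
    code = info["test_code"]
    code = code.replace("PLACEHOLDER_FN_NAME", info["entry_fn_name"])
    if info.get("entry_cls_name"):
        code = code.replace("PLACEHOLDER_CLS_NAME", info["entry_cls_name"])
    return code.replace("PLACEHOLDER_CODE_BODY", prediction_code)
```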
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/tp3")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
removeOcc
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
f
```
## Dataset Creation
See section 2 and section 4.4 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
For information on how the original P3 dataset was collected, please see [Programming Puzzles paper](https://arxiv.org/abs/2106.05784).
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishabh and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@inproceedings{
schuster2021programming,
title={Programming Puzzles},
author={Tal Schuster and Ashwin Kalyan and Alex Polozov and Adam Tauman Kalai},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2021},
url={https://arxiv.org/abs/2106.05784}
}
``` |
rcds/swiss_law_area_prediction | 2023-07-20T07:38:52.000Z | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2306.09237",... | rcds | This dataset contains court decision for law area prediction task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 2 | 17 | ---
license: cc-by-sa-4.0
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
multilinguality:
- multilingual
pretty_name: Law Area Prediction
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
---
# Dataset Card for Law Area Prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains cases to be classified into the four main areas of law: Public, Civil, Criminal and Social
These can be classified further into sub-areas:
```
"public": ['Tax', 'Urban Planning and Environmental', 'Expropriation', 'Public Administration', 'Other Fiscal'],
"civil": ['Rental and Lease', 'Employment Contract', 'Bankruptcy', 'Family', 'Competition and Antitrust', 'Intellectual Property'],
'criminal': ['Substantive Criminal', 'Criminal Procedure']
```
### Supported Tasks and Leaderboards
Law Area Prediction can be framed as a text classification task.
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents|
|------------|------------|--------------------|
| German | **de** | 127K |
| French | **fr** | 156K |
| Italian | **it** | 46K |
## Dataset Structure
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- law_area: label of the decision (main area of law)
- law_sub_area: sub area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
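A minimal loading sketch; the per-language config name `de` is an assumption based on the subsets listed above:
```python
from datasets import load_dataset

ds = load_dataset("rcds/swiss_law_area_prediction", "de", split="train")
example = ds[0]
print(example["facts"][:200])
print(example["law_area"], "/", example["law_sub_area"])
```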
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset was split in a date-stratified manner:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
|
shibing624/AdvertiseGen | 2023-05-12T07:25:00.000Z | [
"task_categories:text-generation",
"language:zh",
"license:cc-by-4.0",
"text-generation",
"e-commerce advertise",
"region:us"
] | shibing624 | null | null | null | 13 | 17 | ---
license: cc-by-4.0
language:
- zh
tags:
- text-generation
- e-commerce advertise
pretty_name: AdvertiseGen
task_categories:
- text-generation
---
# Dataset Card for AdvertiseGen
- **formal url:** https://www.luge.ai/#/luge/dataDetail?id=9
## Dataset Description
Dataset description
AdvertiseGen is a dataset for generating e-commerce advertising copy.
AdvertiseGen is built from the correspondence between product-page tags and advertising copy. It is a typical open-ended generation task: when a model generates open-ended copy from key-value input, factual consistency with the input information deserves particular attention.
- Task description: given the keyword/attribute list (kv-list) describing a product, generate suitable advertising copy (adv) for it;
- Data size: 114k training examples, 1k validation examples, 3k test examples;
- Data source: the CoAI group at Tsinghua University;
### Supported Tasks and Leaderboards
The dataset is designed for generating e-commerce advertising copy.
### Languages
The data in AdvertiseGen are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"content": "类型#上衣*材质#牛仔布*颜色#白色*风格#简约*图案#刺绣*衣样式#外套*衣款式#破洞",
"summary": "简约而不简单的牛仔外套,白色的衣身十分百搭。衣身多处有做旧破洞设计,打破单调乏味,增加一丝造型看点。衣身后背处有趣味刺绣装饰,丰富层次感,彰显别样时尚。"
}
```
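A small sketch for parsing the key-value `content` string into a dictionary; the helper name is illustrative:
```python
def parse_kv_list(content: str) -> dict:
    """Split '类型#上衣*材质#牛仔布*...' into {'类型': '上衣', '材质': '牛仔布', ...}."""
    return dict(item.split("#", 1) for item in content.split("*"))

print(parse_kv_list("类型#上衣*材质#牛仔布*颜色#白色"))
# {'类型': '上衣', '材质': '牛仔布', '颜色': '白色'}
```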
### Citation Information
Dataset citation
If you use this dataset in an academic paper, please cite it as follows:
```
Shao, Zhihong, et al. "Long and Diverse Text Generation with Planning-based Hierarchical Variational Model." Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.
```
|
shibing624/CSC | 2023-05-12T07:30:59.000Z | [
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"text-correction",
"region:us"
] | shibing624 | null | null | null | 17 | 17 | ---
license: apache-2.0
language:
- zh
tags:
- text-correction
pretty_name: CSC
task_categories:
- text-generation
---
# Dataset Card for CSC
A Chinese spelling correction dataset.
- **Repository:** https://github.com/shibing624/pycorrector
## Dataset Description
Chinese Spelling Correction (CSC) is a task to detect and correct misspelled characters in Chinese texts.
CSC is challenging since many Chinese characters are visually or phonologically similar but with quite different semantic meanings.
This Chinese spelling correction dataset contains about 270k examples, obtained by merging and cleaning the original SIGHAN13/14/15 datasets and the Wang271k dataset. It is in JSON format and includes the positions of the erroneous characters.
### Original Dataset Summary
- test.json and dev.json form the **SIGHAN dataset** (SIGHAN13/14/15), taken from the [official csc.html page](http://nlp.ee.ncu.edu.tw/resource/csc.html); file size: 339 KB, about 4k examples.
- train.json is the **Wang271k dataset**, taken from [Automatic-Corpus-Generation (provided by dimmywang)](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml); file size: 93 MB, about 270k examples.
If you only want the SIGHAN data, you can load it like this:
```python
from datasets import load_dataset
dev_ds = load_dataset('shibing624/CSC', split='validation')
print(dev_ds)
print(dev_ds[0])
test_ds = load_dataset('shibing624/CSC', split='test')
print(test_ds)
print(test_ds[0])
```
### Supported Tasks and Leaderboards
Chinese spelling correction task.
The dataset is designed for training pretrained language models on the CSC task.
### Languages
The data in CSC are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"id": "B2-4029-3",
"original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
"wrong_ids": [
5,
31
],
"correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
}
```
### Data Fields
Field descriptions:
- id: unique identifier (carries no meaning)
- original_text: the original text containing errors
- wrong_ids: positions of the erroneous characters, 0-indexed
- correct_text: the corrected text
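A small sketch showing how `wrong_ids` indexes into the texts; the helper name is illustrative:
```python
def error_pairs(example):
    """Return the (wrong, correct) character pairs marked by wrong_ids."""
    return [
        (example["original_text"][i], example["correct_text"][i])
        for i in example["wrong_ids"]
    ]

# For the instance above this yields [('嗓', '噪'), ('嗓', '噪')].
```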
### Data Splits
| | train | dev | test |
|---------------|------:|--:|--:|
| CSC | 251,835 | 27,981 | 1,100 |
### Licensing Information
The dataset is available under the Apache 2.0.
### Citation Information
```latex
@misc{Xu_Pycorrector_Text_error,
title={Pycorrector: Text error correction tool},
author={Xu Ming},
year={2021},
howpublished={\url{https://github.com/shibing624/pycorrector}},
}
```
### Contributions
Compiled and uploaded by [shibing624](https://github.com/shibing624) |
TimoImhof/TriviaQA-in-SQuAD-format | 2023-04-01T13:43:14.000Z | [
"region:us"
] | TimoImhof | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: unmodified
num_bytes: 22886661
num_examples: 15368
- name: modified_30_percent
num_bytes: 22899894
num_examples: 15368
- name: modified_100_percent
num_bytes: 22929228
num_examples: 15368
download_size: 40760032
dataset_size: 68715783
---
# Dataset Card for "TriviaQA-in-SQuAD-format"
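A minimal loading sketch; note that the splits listed in the metadata are named by modification level rather than by train/test:
```python
from datasets import load_dataset

ds = load_dataset("TimoImhof/TriviaQA-in-SQuAD-format", split="unmodified")
example = ds[0]
print(example["question"])
print(example["answers"]["text"])
```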
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pszemraj/scientific_lay_summarisation-elife-norm | 2023-04-06T23:34:11.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:tomasg25/scientific_lay_summarisation",
"language:en",
"license:mit",
"region:us"
] | pszemraj | null | null | null | 3 | 17 | ---
license: mit
task_categories:
- summarization
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---
# scientific_lay_summarisation - elife - normalized
This is the "_elife_" split. For more details, refer to the [PLOS split README](https://huggingface.co/datasets/pszemraj/scientific_lay_summarisation-plos-norm)
## Contents
load with datasets:
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("pszemraj/scientific_lay_summarisation-elife-norm")
dataset
```
Output:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 4346
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
})
```
## Lengths
Train set:

|
Nebulous/gpt4all_pruned | 2023-04-03T23:29:29.000Z | [
"license:cc",
"region:us"
] | Nebulous | null | null | null | 14 | 17 | ---
license: cc
---
Pruned gpt4all dataset meant to reduce annoying behaviors and nonsensical prompts |
pain/Arabic-Tweets | 2023-04-08T10:02:07.000Z | [
"language:ar",
"license:cc-by-4.0",
"region:us"
] | pain | null | null | null | 7 | 17 | ---
license: cc-by-4.0
language:
- ar
---
# Dataset Card for Dataset Arabic-Tweets
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Paper:** https://ieeexplore.ieee.org/document/10022652
### Dataset Summary
This dataset has been collected from twitter which is more than 41 GB of clean data of Arabic Tweets with nearly 4-billion Arabic words (12-million unique Arabic words).
### Languages
Arabic
### Source Data
Twitter
### Example of data loading using streaming:
```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets",split='train', streaming=True)
print(next(iter(dataset)))
```
### Example of data loading without streaming (the dataset will be downloaded locally):
```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets",split='train')
print(dataset[0])
```
#### Initial Data Collection and Normalization
The collected data comprises 100 GB of Twitter raw data. Only tweets with Arabic characters were crawled. It was observed that the new data contained a large number of Persian tweets as well as many Arabic words with repeated characters. Because of this, and in order to improve data efficiency, the raw data was processed as follows: hashtags, mentions, and links were removed; tweets containing Persian characters, 3 consecutive identical characters, or a single-character word were dropped; normalization of Arabic letters was applied.
This has resulted in more than 41 GB of clean data with nearly 4-billion Arabic words (12-million unique Arabic words).
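A minimal sketch of such a cleaning pipeline; the exact rules, character sets, and normalization conventions used by the authors are assumptions here:
```python
import re

NOISE = re.compile(r"#\S+|@\S+|https?://\S+")  # hashtags, mentions, links
PERSIAN = re.compile(r"[پچژگ]")                # letters specific to Persian (assumed set)
REPEAT3 = re.compile(r"(.)\1\1")               # 3+ consecutive identical characters

def clean_tweet(text):
    text = NOISE.sub(" ", text)
    if PERSIAN.search(text) or REPEAT3.search(text):
        return None  # drop the tweet
    words = text.split()
    if any(len(w) == 1 for w in words):
        return None  # drop tweets containing a single-character word
    # Arabic letter normalization (one common convention)
    text = re.sub("[إأآا]", "ا", " ".join(words))
    return text.replace("ى", "ي").replace("ة", "ه")
```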
## Considerations for Using the Data
- This data has been collected to create a language model. The tweets published without checking the tweets data. Therefore, we are not responsible for any tweets content at all.
### Licensing Information
[Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@INPROCEEDINGS{10022652,
author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
title={MASC: Massive Arabic Speech Corpus},
year={2023},
volume={},
number={},
pages={1006-1013},
doi={10.1109/SLT54892.2023.10022652}}
``` |
WxWx/ChatGPT-Detector-Bias | 2023-04-10T00:48:06.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"ChatGPT",
"GPT Detector",
"ChatGPT Detector",
"arxiv:2304.02819",
"region:us"
] | WxWx | The data folders contain the human-written and AI-generated datasets used in our study. Each subfolder contains a name.json file, which provides the metadata, and a data.json file, which contains the text samples. | @article{liang2023gpt,
title={GPT detectors are biased against non-native English writers},
author={Weixin Liang and Mert Yuksekgonul and Yining Mao and Eric Wu and James Zou},
year={2023},
eprint={2304.02819},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 7 | 17 | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- ChatGPT
- GPT Detector
- ChatGPT Detector
size_categories:
- n<1K
---
# GPT Detectors Are Biased Against Non-Native English Writers
[](https://lbesson.mit-license.org/)
[](https://www.python.org/downloads/release/python-390/)
[](https://jupyter.org/try)
This repository contains the data and supplementary materials for our paper:
**GPT Detectors Are Biased Against Non-Native English Writers**\
Weixin Liang*, Mert Yuksekgonul*, Yining Mao*, Eric Wu*, James Zou\
arXiv: [2304.02819](https://arxiv.org/abs/2304.02819)
```bibtex
@article{liang2023gpt,
title={GPT detectors are biased against non-native English writers},
author={Weixin Liang and Mert Yuksekgonul and Yining Mao and Eric Wu and James Zou},
year={2023},
eprint={2304.02819},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Abstract
*The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.*
<p align='center'>
<img width="636" src="https://user-images.githubusercontent.com/32794044/230640445-8d1221d4-8651-4cf4-b6d7-b6d440d6e0f5.png">
<br>
<b>Figure 1: Bias in GPT detectors against non-native English writing samples.</b>
</p>
(a) Performance comparison of seven widely-used GPT detectors. More than half of the non-native-authored TOEFL (Test of English as a Foreign Language) essays are incorrectly classified as "AI-generated," while detectors exhibit near-perfect accuracy for college essays.
Using ChatGPT-4 to improve the word choices in TOEFL essays (Prompt: "Enhance the word choices to sound more like that of a native speaker.") significantly reduces misclassification as AI-generated text.
(b) TOEFL essays unanimously misclassified as AI-generated show significantly lower perplexity compared to others, suggesting that GPT detectors might penalize authors with limited linguistic expressions.
<p align='center'>
<img width="100%" src="https://user-images.githubusercontent.com/32794044/230640270-e6c3d0ca-aabd-4d13-8527-15fed1491050.png">
<br>
<b>Figure 2: Simple prompts effectively bypass GPT detectors.</b>
</p>
(a) For ChatGPT-3.5 generated college admission essays, the performance of seven widely-used GPT detectors declines markedly when a second-round self-edit prompt ("Elevate the provided text by employing literary language") is applied, with detection rates dropping from up to 100% to up to 13%.
(b) ChatGPT-3.5 generated essays initially exhibit notably low perplexity; however, applying the self-edit prompt leads to a significant increase in perplexity.
(c) Similarly, in detecting ChatGPT-3.5 generated scientific abstracts, a second-round self-edit prompt ("Elevate the provided text by employing advanced technical language") leads to a reduction in detection rates from up to 68% to up to 28%.
(d) ChatGPT-3.5 generated abstracts have slightly higher perplexity than the generated essays but remain low. Again, the self-edit prompt significantly increases the perplexity.
## Repo Structure Overview
```
.
├── README.md
├── data/
├── human_data/
├── TOEFL_real_91/
├── name.json
├── data.json
├── TOEFL_gpt4polished_91/
├── ...
├── CollegeEssay_real_70/
├── CS224N_real_145/
├── gpt_data/
├── CollegeEssay_gpt3_31/
├── CollegeEssay_gpt3PromptEng_31/
├── CS224N_gpt3_145/
├── CS224N_gpt3PromptEng_145/
```
The `data` folder contains the human-written and AI-generated datasets used in our study. Each subfolder contains a `name.json` file, which provides the metadata, and a `data.json` file, which contains the text samples.
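A small sketch for loading one subfolder under this layout (the internal structure of `data.json`, e.g. a list of text samples, is an assumption):
```python
import json
from pathlib import Path

folder = Path("data/human_data/TOEFL_real_91")
meta = json.loads((folder / "name.json").read_text())     # metadata
samples = json.loads((folder / "data.json").read_text())  # text samples
print(meta)
print(len(samples), "samples")
```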
## Reference
```bibtex
@article{liang2023gpt,
title={GPT detectors are biased against non-native English writers},
author={Weixin Liang and Mert Yuksekgonul and Yining Mao and Eric Wu and James Zou},
year={2023},
eprint={2304.02819},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
J4YL19/biored_tokenized | 2023-04-06T22:33:57.000Z | [
"region:us"
] | J4YL19 | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: pmid
dtype: string
- name: passage
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 2259680
num_examples: 387
- name: val
num_bytes: 604670
num_examples: 98
- name: test
num_bytes: 576610
num_examples: 97
download_size: 1083246
dataset_size: 3440960
---
# Dataset Card for "biored_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cvssp/WavCaps | 2023-07-06T13:28:10.000Z | [
"size_categories:100B<n<1T",
"language:en",
"license:cc-by-4.0",
"arxiv:2303.17395",
"region:us"
] | cvssp | null | null | null | 14 | 17 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 100B<n<1T
---
# WavCaps
WavCaps is a ChatGPT-assisted weakly-labelled audio captioning dataset for audio-language multimodal research, where the audio clips are sourced from three websites ([FreeSound](https://freesound.org/), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/), and [SoundBible](https://soundbible.com/)) and a sound event detection dataset ([AudioSet Strongly-labelled Subset](https://research.google.com/audioset/download_strong.html)).
- **Paper:** https://arxiv.org/abs/2303.17395
- **Github:** https://github.com/XinhaoMei/WavCaps
## Statistics
| Data Source | # audio | avg. audio duration (s) | avg. text length |
|--------------------|----------|-------------------------|------------------|
| FreeSound | 262300 | 85.98 | 6.77 |
| BBC Sound Effects | 31201 | 115.04 | 9.67 |
| SoundBible | 1232 | 13.12 | 5.87 |
| AudioSet SL subset | 108317 | 10.00 | 9.79 |
| WavCaps | 403050 | 67.59 | 7.80 |
## Download
We provide a json file for each data source. For audio clips sourced from websites, we provide the processed caption, the raw description, and other metadata. For audio clips from AudioSet, we use the version from PANNs, where each file name is prefixed with a 'Y'. For the start times, please refer to the original metadata of the AudioSet SL subset.
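A minimal sketch for inspecting one of these json files; the filename and the top-level structure here are assumptions, so check the repository's file listing first:
```python
import json

with open("SoundBible.json") as f:  # hypothetical filename
    meta = json.load(f)

print(type(meta))  # inspect the top-level structure first
entries = meta.get("data", []) if isinstance(meta, dict) else meta
for item in entries[:3]:  # each entry is assumed to hold a caption plus other metadata
    print(item)
```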
Waveforms in FLAC format can be downloaded from the [Zip_files](https://huggingface.co/datasets/cvssp/WavCaps/tree/main/Zip_files) directory.
Pretrained models can be downloaded [here](https://drive.google.com/drive/folders/1pFr8IRY3E1FAtc2zjYmeuSVY3M5a-Kdj?usp=share_link).
<font color='red'>If you get "error: invalid zip file with overlapped components (possible zip bomb)" when unzipping,
please try the following commands: </font>
`zip -F AudioSet_SL.zip --out AS.zip`
`unzip AS.zip`
## License
Only academic uses are allowed for the WavCaps dataset. By downloading audio clips through the links provided in the json files, you agree that you will use the audio for research purposes only.
For credits for audio clips from FreeSound, please refer to its own page.
For detailed license information, please refer to:
[FreeSound](https://freesound.org/help/faq/#licenses), [BBC Sound Effects](https://sound-effects.bbcrewind.co.uk/licensing), [SoundBible](https://soundbible.com/about.php)
The models we provided are created under a UK data copyright exemption for non-commercial research.
## Code for related tasks
We provide codes and pre-trained models for audio-language retrieval, automated audio captioning, and zero-shot audio classification.
* [Retrieval](https://github.com/XinhaoMei/WavCaps/tree/master/retrieval)
* [Captioning](https://github.com/XinhaoMei/WavCaps/tree/master/captioning)
* [Zero-shot Audio Classification](https://github.com/XinhaoMei/WavCaps/blob/master/retrieval/zero_shot_classification.py)
* [Text-to-Sound Generation](https://github.com/haoheliu/AudioLDM)
## Citation
Please cite the following if you make use of the dataset.
```bibtex
@article{mei2023wavcaps,
title={WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research},
author={Mei, Xinhao and Meng, Chutong and Liu, Haohe and Kong, Qiuqiang and Ko, Tom and Zhao, Chengqi and Plumbley, Mark D and Zou, Yuexian and Wang, Wenwu},
journal={arXiv preprint arXiv:2303.17395},
year={2023}
}
``` |
EdwardLin2023/MELD-Audio | 2023-04-24T04:04:52.000Z | [
"license:cc-by-4.0",
"region:us"
] | EdwardLin2023 | Multimodal EmotionLines Dataset (MELD) has been created by enhancing and extending EmotionLines dataset.
MELD contains the same dialogue instances available in EmotionLines, but it also encompasses audio and
visual modality along with text. MELD has more than 1400 dialogues and 13000 utterances from Friends TV series.
Multiple speakers participated in the dialogues. Each utterance in a dialogue has been labeled by any of these
seven emotions -- Anger, Disgust, Sadness, Joy, Neutral, Surprise and Fear. MELD also has sentiment (positive,
negative and neutral) annotation for each utterance.
This dataset is slightly modified, so that it concentrates on Emotion recognition in audio input only. | @article{poria2018meld,
title={Meld: A multimodal multi-party dataset for emotion recognition in conversations},
author={Poria, Soujanya and Hazarika, Devamanyu and Majumder, Navonil and Naik, Gautam and Cambria, Erik and Mihalcea, Rada},
journal={arXiv preprint arXiv:1810.02508},
year={2018}
}
@article{chen2018emotionlines,
title={Emotionlines: An emotion corpus of multi-party conversations},
author={Chen, Sheng-Yeh and Hsu, Chao-Chun and Kuo, Chuan-Chun and Ku, Lun-Wei and others},
journal={arXiv preprint arXiv:1802.08379},
year={2018}
} | null | 0 | 17 | ---
license: cc-by-4.0
---
|
renumics/speech_commands_enriched | 2023-09-27T12:02:25.000Z | [
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"annotations_creators:other",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|speech_commands",
"language:en",
"license:cc-by-4... | renumics | This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. This dataset is covered in more detail at
[https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
This version is not yet supported.
The `_silence_` class contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise. | @article{speechcommandsv2,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
} | null | 0 | 17 | ---
annotations_creators:
- other
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|speech_commands
task_categories:
- audio-classification
task_ids:
- keyword-spotting
pretty_name: SpeechCommands
config_names:
- v0.01
- v0.02
tags:
- spotlight
- enriched
- renumics
- enhanced
- audio
- classification
- extended
---
# Dataset Card for SpeechCommands
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets[audio]
```
> **_Notice:_** On Linux, the non-Python dependency libsndfile must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/speech_commands_enriched", "v0.01")
```
[//]: <> (TODO: Update this!)
Start exploring with a simple view:
```python
from renumics import spotlight
df = dataset.to_pandas()
df_show = df.drop(columns=['audio'])
spotlight.show(df_show, port=8000, dtype={"file": spotlight.Audio})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### SpeechCommands Dataset
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
Version 0.02 of the data set (configuration `"v0.02"`) was released on April 11th 2018 and
contains 105,829 audio files.
### Supported Tasks and Leaderboards
* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for
fast response time. Thus, accuracy, model size, and inference time are all crucial.
### Languages
The language data in SpeechCommands is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
```python
{
"file": "no/7846fd85_nohash_0.wav",
"audio": {
"path": "no/7846fd85_nohash_0.wav",
"array": array([ -0.00021362, -0.00027466, -0.00036621, ..., 0.00079346,
0.00091553, 0.00079346]),
"sampling_rate": 16000
},
"label": 1, # "no"
"is_unknown": False,
"speaker_id": "7846fd85",
"utterance_id": 0
}
```
Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`)
```python
{
"file": "tree/8b775397_nohash_0.wav",
"audio": {
"path": "tree/8b775397_nohash_0.wav",
"array": array([ -0.00854492, -0.01339722, -0.02026367, ..., 0.00274658,
0.00335693, 0.0005188]),
"sampling_rate": 16000
},
"label": 28, # "tree"
"is_unknown": True,
"speaker_id": "1b88bf70",
"utterance_id": 0
}
```
Example of background noise (`_silence_`) class:
```python
{
"file": "_silence_/doing_the_dishes.wav",
"audio": {
"path": "_silence_/doing_the_dishes.wav",
"array": array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296]),
"sampling_rate": 16000
},
"label": 30, # "_silence_"
"is_unknown": False,
"speaker_id": "None",
"utterance_id": 0 # doesn't make sense here
}
```
### Data Fields
* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column: `dataset[0]["audio"]` the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audios might take a significant
amount of time. Thus, it is important to first query the sample index before
the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]`.
* `label`: either word pronounced in an audio sample or background noise (`_silence_`) class.
Note that it's an integer value corresponding to the class name.
* `is_unknown`: if a word is auxiliary. Equals to `False` if a word is a core word or `_silence_`,
`True` if a word is an auxiliary word.
* `speaker_id`: unique id of a speaker. Equals to `None` if label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
### Data Splits
The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see section [Source Data](#source-data) for more details).
| | train | validation | test |
|----- |------:|-----------:|-----:|
| v0.01 | 51093 | 6799 | 3081 |
| v0.02 | 84848 | 9982 | 4890 |
Note that in train and validation sets examples of `_silence_` class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:
```python
def sample_noise(example):
# Use this function to extract random 1 sec slices of each _silence_ utterance,
# e.g. inside `torch.utils.data.Dataset.__getitem__()`
from random import randint
if example["label"] == "_silence_":
random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
return example
```
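A sketch of adapting that helper to this card's schema; the key remapping and the `int2str` label lookup are assumptions about how you would wire it into your own pipeline:
```python
label_feature = dataset["train"].features["label"]  # assumed to be a ClassLabel

def adapt_and_sample(example):
    # Remap this card's fields onto the keys sample_noise expects above
    example["speech"] = example["audio"]["array"]
    example["sample_rate"] = example["audio"]["sampling_rate"]
    example["label"] = label_feature.int2str(example["label"])
    return sample_noise(example)
```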
## Dataset Creation
### Curation Rationale
The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.
### Source Data
#### Initial Data Collection and Normalization
The audio files were collected using crowdsourcing, see
[aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five minute
session.
In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?
The audio files were collected using crowdsourcing.
### Annotations
#### Annotation process
Labels are the list of words prepared in advances.
Speakers were prompted for individual words over the course of a five minute
session.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).
### Citation Information
```
@article{speechcommandsv2,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
### Contributions
[More Information Needed] |
PaulineSanchez/Traduction_en_fr_food | 2023-04-24T17:18:08.000Z | [
"task_categories:translation",
"language:fr",
"language:en",
"region:us"
] | PaulineSanchez | null | null | null | 1 | 17 | ---
task_categories:
- translation
language:
- fr
- en
dataset_info:
features:
- name: alim_nom_fr
dtype: string
- name: alim_nom_eng
dtype: string
splits:
- name: train
num_bytes: 238948
num_examples: 3153
download_size: 114072
dataset_size: 238948
---
- info: This dataset comes from the ANSES-CIQUAL 2020 food composition table (English version, XML format), available at https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ |
LennardZuendorf/openlegaldata-bulk-data | 2023-10-07T19:45:45.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:de",
"license:mit",
"legal",
"region:us"
] | LennardZuendorf | null | null | null | 3 | 17 | ---
license: mit
task_categories:
- text-classification
- text-generation
language:
- de
tags:
- legal
pretty_name: openlegaldata.io bulk case data
size_categories:
- 100K<n<1M
---
# Dataset Card for openlegaldata.io bulk case data
## Dataset Description
This is a copy of the latest dump from [openlegaldata.io](https://de.openlegaldata.io/). I will try to keep this updated, since there is no official Hugging Face dataset repo.
- **Homepage:** [https://de.openlegaldata.io/](https://de.openlegaldata.io/)
- **Repository:** [Bulk Data](https://static.openlegaldata.io/dumps/de/)
### Dataset Summary
This is the openlegaldata bulk case download from October 2022. Please refer to the official website (above) for more information. I have not made any changes to it, since I use different datasets for my own projects.
### Languages
- German
## Additional Information
### Licensing/Citation Information
The [openlegaldata platform](https://github.com/openlegaldata/oldp) is licensed under the MIT license, you can access the dataset by citing the original source, [openlegaldata.io](https://de.openlegaldata.io/) |
ybelkada/food101-tiny | 2023-05-05T16:13:57.000Z | [
"region:us"
] | ybelkada | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': baby_back_ribs
'2': baklava
'3': beef_carpaccio
'4': beef_tartare
'5': beet_salad
'6': beignets
'7': bibimbap
'8': bread_pudding
'9': breakfast_burrito
'10': bruschetta
'11': caesar_salad
'12': cannoli
'13': caprese_salad
'14': carrot_cake
'15': ceviche
'16': cheesecake
'17': cheese_plate
'18': chicken_curry
'19': chicken_quesadilla
'20': chicken_wings
'21': chocolate_cake
'22': chocolate_mousse
'23': churros
'24': clam_chowder
'25': club_sandwich
'26': crab_cakes
'27': creme_brulee
'28': croque_madame
'29': cup_cakes
'30': deviled_eggs
'31': donuts
'32': dumplings
'33': edamame
'34': eggs_benedict
'35': escargots
'36': falafel
'37': filet_mignon
'38': fish_and_chips
'39': foie_gras
'40': french_fries
'41': french_onion_soup
'42': french_toast
'43': fried_calamari
'44': fried_rice
'45': frozen_yogurt
'46': garlic_bread
'47': gnocchi
'48': greek_salad
'49': grilled_cheese_sandwich
'50': grilled_salmon
'51': guacamole
'52': gyoza
'53': hamburger
'54': hot_and_sour_soup
'55': hot_dog
'56': huevos_rancheros
'57': hummus
'58': ice_cream
'59': lasagna
'60': lobster_bisque
'61': lobster_roll_sandwich
'62': macaroni_and_cheese
'63': macarons
'64': miso_soup
'65': mussels
'66': nachos
'67': omelette
'68': onion_rings
'69': oysters
'70': pad_thai
'71': paella
'72': pancakes
'73': panna_cotta
'74': peking_duck
'75': pho
'76': pizza
'77': pork_chop
'78': poutine
'79': prime_rib
'80': pulled_pork_sandwich
'81': ramen
'82': ravioli
'83': red_velvet_cake
'84': risotto
'85': samosa
'86': sashimi
'87': scallops
'88': seaweed_salad
'89': shrimp_and_grits
'90': spaghetti_bolognese
'91': spaghetti_carbonara
'92': spring_rolls
'93': steak
'94': strawberry_shortcake
'95': sushi
'96': tacos
'97': takoyaki
'98': tiramisu
'99': tuna_tartare
'100': waffles
splits:
- name: train
num_bytes: 5343359.0
num_examples: 100
download_size: 5256650
dataset_size: 5343359.0
---
# Dataset Card for "food101-tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
d0rj/samsum-ru | 2023-05-13T06:44:23.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:samsum",
"language:ru",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237",
"region:us... | d0rj | null | null | null | 2 | 17 | ---
annotations_creators:
- expert-generated
language_creators:
- translated
language:
- ru
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- samsum
task_categories:
- summarization
task_ids: []
pretty_name: SAMSum Corpus (ru)
tags:
- conversations-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 8598724
num_examples: 14731
- name: validation
num_bytes: 471632
num_examples: 818
- name: test
num_bytes: 483686
num_examples: 819
dataset_size: 9554042
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for SAMSum Corpus (ru)
## Dataset Description
Translated [samsum](https://huggingface.co/datasets/samsum) dataset to russian language.
### Notes
> Row with ID **13828807** was deleted.
### Links
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
### Languages
Russian (translated from English [samsum](https://huggingface.co/datasets/samsum) using Google Translator)
## Dataset Structure
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 14731
- val: 818
- test: 819
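A minimal loading sketch based on the fields and splits above:
```python
from datasets import load_dataset

ds = load_dataset("d0rj/samsum-ru")
sample = ds["train"][0]
print(sample["dialogue"])
print("->", sample["summary"])
```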
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
``` |
hongerzh/NFT | 2023-09-28T06:00:22.000Z | [
"region:us"
] | hongerzh | null | null | null | 0 | 17 | Entry not found |
Mutonix/RefGPT-Fact | 2023-05-30T13:33:07.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"arxiv:2305.14994",
"region:us"
] | Mutonix | null | null | null | 9 | 17 | ---
license: apache-2.0
dataset_info:
features:
- name: dialogue
dtype: string
- name: reference
dtype: string
- name: language
dtype: string
- name: type
dtype: string
splits:
- name: zh
num_bytes: 180760081
num_examples: 50000
- name: en
num_bytes: 464054853
num_examples: 50000
download_size: 260969665
dataset_size: 644814934
task_categories:
- conversational
language:
- zh
- en
arxiv: https://arxiv.org/abs/2305.14994
size_categories:
- 10K<n<100K
---
# Dataset Card for RefGPT-Fact
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/ziliwangnlp/RefGPT](https://github.com/ziliwangnlp/RefGPT)
- **Paper:** [https://arxiv.org/abs/2305.14994](https://arxiv.org/abs/2305.14994)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<p align="center">
<a href="https://arxiv.org/abs/2305.14994"><b>[Paper] RefGPT</b></a> |
<a href="https://github.com/ziliwangnlp/RefGPT"><b>[Github] RefGPT</b></a>
</p>
RefGPT-Fact is a dataset containing 100k multi-turn dialogues about factual knowledge, with 50k in English and 50k in Chinese. The English version uses the English Wikipedia as the reference, and the Chinese version uses Baidu Baike, a frequently-used Chinese online encyclopedia.
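Given the `zh`/`en` split names in the metadata above, each language can presumably be loaded directly:
```python
from datasets import load_dataset

en = load_dataset("Mutonix/RefGPT-Fact", split="en")
print(en[0]["language"], en[0]["type"])
print(en[0]["dialogue"][:200])
```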
### Supported Tasks and Leaderboards
Chatbot instruction finetuning
### Languages
Chinese, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Please note that the RefGPT datasets, including RefGPT-Fact and RefGPT-Code, have not undergone manual verification, and as such their safety cannot be strictly guaranteed. Users should be aware that they are responsible for the results generated using this data.
### Discussion of Biases
As the datasets RefGPT-Fact and RefGPT-Code are collected using references such as Wikipedia and GitHub repositories, it cannot be avoided that the references themselves contain factual errors, typos, or (in the case of GitHub repositories) bugs and malicious code. The datasets may also reflect the biases of the selected references and of the GPT-3.5/GPT-4 models.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{yang2023refgpt,
title={RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs},
author={Dongjie Yang and Ruifeng Yuan and YuanTao Fan and YiFei Yang and Zili Wang and Shusen Wang and Hai Zhao},
year={2023},
eprint={2305.14994},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[More Information Needed] |
openchat/openchat_sharegpt4_dataset | 2023-07-01T13:20:31.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | openchat | null | null | null | 102 | 17 | ---
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: OpenChat
size_categories:
- 1K<n<10K
---
This repository contains cleaned and filtered ShareGPT GPT-4 data used to train OpenChat. Details can be found in the [OpenChat repository](https://github.com/imoneoi/openchat). |
notable12/AICamp-2023-Skin-Conditions-Dataset | 2023-06-19T17:45:17.000Z | [
"license:mit",
"region:us"
] | notable12 | null | null | null | 1 | 17 | ---
license: mit
---
|
TrainingDataPro/cars-video-object-tracking | 2023-09-20T14:58:57.000Z | [
"task_categories:image-segmentation",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The collection of overhead video frames, capturing various types of vehicles
traversing a roadway. The dataset includes light vehicles (cars) and
heavy vehicles (minivans). | @InProceedings{huggingface:dataset,
title = {cars-video-object-tracking},
author = {TrainingDataPro},
year = {2023}
} | null | 2 | 17 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
- image-classification
language:
- en
tags:
- code
dataset_info:
features:
- name: image_id
dtype: int32
- name: image
dtype: image
- name: mask
dtype: image
- name: annotations
dtype: string
splits:
- name: train
num_bytes: 614230158
num_examples: 100
download_size: 580108296
dataset_size: 614230158
---
# Cars Tracking
The collection of overhead video frames, capturing various types of vehicles traversing a roadway. The dataset includes light vehicles (cars) and heavy vehicles (minivans).
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cars-video-object-tracking) to discuss your requirements, learn about the price and buy the dataset.

# Data Format
Each video frame from `images` folder is paired with an `annotations.xml` file that meticulously defines the tracking of each vehicle using polygons.
These annotations not only specify the location and path of each vehicle but also differentiate between the vehicle classes:
- cars,
- minivans.
The data labeling is visualized in the `boxes` folder.
# Example of the XML-file

# Object tracking is made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cars-video-object-tracking)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
neural-bridge/cqa_dev | 2023-10-04T20:10:33.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"region:us"
] | neural-bridge | null | null | null | 0 | 17 | ---
task_categories:
- question-answering
language:
- en
pretty_name: s
size_categories:
- n<1K
---
# Development Dataset for Falcon-40B
This is a development dataset consisting of ten samples prepared on various topics. It is used to check whether a model fine-tuned for the context-question-answer task can generate satisfactory responses from the given context and question. |
Delius/ChineseWebNovel | 2023-07-14T07:30:07.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:apache-2.0",
"region:us"
] | Delius | null | null | null | 6 | 17 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
---
Chinese Web Novel Dataset
Summarized by Claude, with the order converted for the novel text extension task.
WARNING: please be aware of the context length! |
bigheiniuJ/ChatGPTAug | 2023-07-23T00:06:08.000Z | [
"region:us"
] | bigheiniuJ | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: label
dtype: string
- name: instance_text
dtype: string
- name: seed
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: id
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: dev
num_bytes: 263432
num_examples: 2205
- name: test
num_bytes: 6590715
num_examples: 45315
- name: train
num_bytes: 278076
num_examples: 2250
download_size: 3148358
dataset_size: 7132223
---
# Dataset Card for "ChatGPTAug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FreedomIntelligence/MMLU_Korean | 2023-08-06T08:06:43.000Z | [
"language:ko",
"license:mit",
"region:us"
] | FreedomIntelligence | null | null | null | 2 | 17 | ---
license: mit
language:
- ko
---
Korean version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
jondurbin/airoboros-gpt4-2.0 | 2023-07-30T08:30:24.000Z | [
"license:other",
"region:us"
] | jondurbin | null | null | null | 14 | 17 | ---
license: other
---
## Overview
This is a brand new dataset, with nothing copied from the 1.* series of airoboros, using only the June version of gpt-4.
I used the latest overhaul of the airoboros python tool to generate the data, which has several "instructors", where an instructor is a specific prompt/response generator.
The instructors include:
- agent/function style prompts, which generate a function name and args based on the provided input and available functions in either JSON or YAML format
- model/scenario/character cards, to help build random descriptive cards based on a template
- coding and scripting
- contextual q&a with the specific context obedient formatting
- chain-of-thought, i.e. for a given question, generate ~3 possible solutions, rank them, select the best
- experience, e.g. guided meditations or describing a walk through a forest
- general - completely random tasks not specifically targeting any type of task, using a random list of topics
- jokes - still horrible, but at least there are some now
- orca, i.e. "Solve [problem], provide step-by-step reasoning."
- execution planning, specifically the reWOO style, where you describe a list of available functions and it will generate a plan to make use of them
- riddles - still not great either, but present
- roleplay
- songs
- wordgames, e.g. give me a list of 28 words that start with 'cr'
- creative writing
**Is it better than 1.4?**
Not necessarily. It has some extra functionality that didn't exist before, but if you want to be sure you don't lose much, check out m2.0, which is a merge of 1.4.1 and 2.0:
https://huggingface.co/datasets/jondurbin/airoboros-gpt4-m2.0
The main point here was to test the June version of gpt-4 against the March version (and add new prompt types).
### Category breakdown

### Configuration for airoboros
https://gist.github.com/jondurbin/65df002c16560899e05365ca6cbd43e3
### Licence and usage restrictions
The data was generated by gpt-4 via OpenAI API calls.
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me from any and all license related issues.
Attribution would be nice if you use some or all of the data. |
collectiveai/drive-thru-generated-utterance-action-list-v2 | 2023-07-26T17:00:17.000Z | [
"region:us"
] | collectiveai | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: utterance
dtype: string
- name: actions
dtype: string
splits:
- name: train_clean
num_bytes: 95120
num_examples: 552
- name: train_dirty
num_bytes: 95232
num_examples: 552
- name: test_clean
num_bytes: 11769
num_examples: 69
- name: test_dirty
num_bytes: 11790
num_examples: 69
- name: val_clean
num_bytes: 11570
num_examples: 70
- name: val_dirty
num_bytes: 11595
num_examples: 70
download_size: 94376
dataset_size: 237076
---
# Dataset Card for "drive-thru-generated-utterance-action-list-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/inventions_azaria_mitchell | 2023-07-28T20:11:14.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 36994.520547945205
num_examples: 700
- name: test
num_bytes: 9301.479452054795
num_examples: 176
download_size: 21827
dataset_size: 46296.0
---
# Dataset Card for "inventions_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atmallen/cities_azaria_mitchell | 2023-07-28T20:11:26.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 17 | ---
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 374056.8
num_examples: 8000
- name: test
num_bytes: 93514.2
num_examples: 2000
download_size: 155735
dataset_size: 467571.0
---
# Dataset Card for "cities_azaria_mitchell"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |