author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
helliun | null | null | null | false | null | false | helliun/testtyt | 2022-11-15T18:00:42.000Z | null | false | bb1fff2db16bd92b2b658a9d37a720c720d8844b | [] | [] | https://huggingface.co/datasets/helliun/testtyt/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: channel
dtype: string
- name: channel_id
dtype: string
- name: title
dtype: string
- name: categories
sequence: string
- name: tags
sequence: string
- name: description
dtype: string
- name: text
dtype: string
- name: segments
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2138
num_examples: 1
download_size: 11227
dataset_size: 2138
---
# Dataset Card for "testtyt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juancopi81 | null | null | null | false | 3 | false | juancopi81/test_whisper_test | 2022-11-15T21:57:02.000Z | null | false | 43d716dc64f9ede73658c2a57c66de81ca7afe95 | [] | [] | https://huggingface.co/datasets/juancopi81/test_whisper_test/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 32551
num_examples: 8
download_size: 39136
dataset_size: 32551
---
# Dataset Card for "test_whisper_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saraimarte | null | null | null | false | null | false | saraimarte/flowerVase | 2022-11-15T20:19:00.000Z | null | false | 9fa2b05e61f41ef0537b9c2ba7ec3f49e6e1fa8c | [] | [
"license:other"
] | https://huggingface.co/datasets/saraimarte/flowerVase/resolve/main/README.md | ---
license: other
---
|
amydeng2000 | null | null | null | false | 19 | false | amydeng2000/strategy-qa | 2022-11-16T00:46:36.000Z | null | false | 9ed9a02b7646d4e7be0d5d3289f867384eda76b5 | [] | [
"language_creators:found"
] | https://huggingface.co/datasets/amydeng2000/strategy-qa/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators:
- found
license: []
multilinguality: []
pretty_name: StrategyQA dataset from the Allen Institute
size_categories: []
source_datasets: []
tags: []
task_categories: []
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Examples in the datasets are stored in the following format:
- qid: Question ID.
- term: The Wikipedia term used to prime the question writer.
- description: A short description of the term, extracted from Wikipedia.
- question: A strategy question.
- answer: A boolean answer to the question (True/False for “Yes”/“No”).
- facts: (Noisy) facts provided by the question writer in order to guide the following annotation tasks (see more details in the paper).
- decomposition: A sequence (list) of single-step questions that form a reasoning process for answering the question. References to answers to previous steps are marked with “#”. Further explanations can be found in the paper.
- evidence: A list with 3 annotations, each of which has matched evidence for each decomposition step. Evidence for a decomposition step is a list with paragraph IDs and potentially the reserved tags no_evidence and operation.
The file strategyqa_train_filtered.json does not include annotations of facts, decomposition, and evidence, and the public test examples in strategyqa_test.json include only the fields qid and question.
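As an illustration of the `decomposition` format (the record below is invented, and `resolve_references` is a hypothetical helper, not part of the dataset), the "#" markers can be resolved against the answers to previous steps:

```python
# Hypothetical StrategyQA-style record; values are illustrative, not from the corpus.
example = {
    "qid": "q1",
    "question": "Would a vegan eat a hamburger?",
    "answer": False,
    "decomposition": [
        "What is a hamburger made of?",
        "Do vegans eat #1?",
    ],
}

def resolve_references(steps, answers):
    """Replace '#k' markers with the answer to step k (1-indexed)."""
    resolved = []
    for step in steps:
        for k, ans in enumerate(answers, start=1):
            step = step.replace(f"#{k}", str(ans))
        resolved.append(step)
    return resolved

# Suppose step 1 was answered with "beef"; step 2's "#1" then resolves to it.
steps = resolve_references(example["decomposition"], ["beef"])
print(steps[1])  # Do vegans eat beef?
```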
### Data Splits
- strategyqa_train.json: The training set of StrategyQA, which includes 2,290 examples.
- strategyqa_train_paragraphs.json: Paragraphs from our corpus that were matched as evidence for examples in the training set.
- strategyqa_train_filtered.json: 2,821 additional questions, excluded from the official training set, that were filtered by our solvers during data collection (see more details in the paper).
- strategyqa_test.json: The test set of StrategyQA, which includes 490 examples.
|
shaurya0512 | null | null | null | false | null | false | shaurya0512/acl-anthology-corpus | 2022-11-16T00:27:05.000Z | acronym-identification | false | f0c9ce1e63bce1daca83570f8f30f5c430ef9da8 | [] | [
"language:en",
"language_creators:found",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:research papers",
"tags:acl",
"task_categories:token-classification"
] | https://huggingface.co/datasets/shaurya0512/acl-anthology-corpus/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: acl-anthology-corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- research papers
- acl
task_categories:
- token-classification
task_ids: []
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
---
# Dataset Card for ACL Anthology Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/shauryr/ACL-anthology-corpus
- **Point of Contact:** shauryr@gmail.com
### Dataset Summary
A dataframe with the extracted metadata (detailed in the table below) and the full text of the collection for analysis: **size 489M**
### Languages
en, zh and others (TODO: find the languages in ACL)
## Dataset Structure
Dataframe
### Data Instances
Each row is a paper from the ACL Anthology.
### Data Fields
| **Column name** | **Description** |
| :---------------: | :---------------------------: |
| `acl_id` | unique ACL id |
| `abstract` | abstract extracted by GROBID |
| `full_text` | full text extracted by GROBID |
| `corpus_paper_id` | Semantic Scholar ID |
| `pdf_hash` | sha1 hash of the pdf |
| `numcitedby` | number of citations from S2 |
| `url` | link of publication |
| `publisher` | - |
| `address` | Address of conference |
| `year` | - |
| `month` | - |
| `booktitle` | - |
| `author` | list of authors |
| `title` | title of paper |
| `pages` | - |
| `doi` | - |
| `number` | - |
| `volume` | - |
| `journal` | - |
| `editor` | - |
| `isbn` | - |
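As a sketch of how these columns might be used (the rows below are invented; only the column names come from the table above), one can, for example, count papers per year:

```python
from collections import Counter

# Toy rows mirroring a few of the columns above; values are invented.
rows = [
    {"acl_id": "P19-0001", "title": "Paper A", "year": "2019", "numcitedby": 12},
    {"acl_id": "2020.acl-main.1", "title": "Paper B", "year": "2020", "numcitedby": 3},
    {"acl_id": "2020.acl-main.2", "title": "Paper C", "year": "2020", "numcitedby": 7},
]

papers_per_year = Counter(row["year"] for row in rows)
print(papers_per_year["2020"])  # 2
```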
## Dataset Creation
The corpus contains all the papers in the ACL Anthology as of September 2022.
### Source Data
- [ACL Anthology](https://aclanthology.org)
- [Semantic Scholar](https://www.semanticscholar.org)
# Additional Information
### Licensing Information
The ACL Anthology Corpus is released under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/). By using this corpus, you agree to its usage terms.
### Citation Information
If you use this corpus in your research please use the following BibTeX entry:
@Misc{acl_anthology_corpus,
author = {Shaurya Rohatgi},
title = {ACL Anthology Corpus with Full Text},
howpublished = {Github},
year = {2022},
url = {https://github.com/shauryr/ACL-anthology-corpus}
}
### Acknowledgements
We thank Semantic Scholar for providing access to the citation-related data in this corpus.
### Contributions
Thanks to [@shauryr](https://github.com/shauryr) and [Yanxia Qin](https://github.com/qolina) for adding this dataset.
|
SergiiGurbych | null | null | null | false | 1 | false | SergiiGurbych/sent_anal_ukr_binary | 2022-11-15T23:45:48.000Z | null | false | b6304accd4a4626f91b52e1a2b3187149636478a | [] | [] | https://huggingface.co/datasets/SergiiGurbych/sent_anal_ukr_binary/resolve/main/README.md | This dataset for the Ukrainian language contains 200 original sentences manually labeled with 0 (negative) or 1 (positive). |
amydeng2000 | null | null | null | false | 4 | false | amydeng2000/CREAK | 2022-11-16T01:44:06.000Z | null | false | 3600013a4e003d07bfd692e1d156bcc3a6333421 | [] | [] | https://huggingface.co/datasets/amydeng2000/CREAK/resolve/main/README.md | Home page & Original source: https://github.com/yasumasaonoe/creak |
Tristan | null | null | null | false | null | false | Tristan/olm-october-2022-tokenized-512 | 2022-11-16T01:47:11.000Z | null | false | ea91f2e742ddc5791c57f27b2939a836e43314ba | [] | [] | https://huggingface.co/datasets/Tristan/olm-october-2022-tokenized-512/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 79589759460
num_examples: 25807315
download_size: 21375344353
dataset_size: 79589759460
---
# Dataset Card for "olm-october-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juancopi81 | null | null | null | false | null | false | juancopi81/diana_uribe | 2022-11-16T20:11:44.000Z | null | false | c2a30c4c022f98a5ae3f600f696e301677db89d7 | [] | [] | https://huggingface.co/datasets/juancopi81/diana_uribe/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 1826220
num_examples: 27
download_size: 894542
dataset_size: 1826220
---
# Dataset Card for "diana_uribe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tristan | null | null | null | false | 33 | false | Tristan/olm-october-2022-tokenized-1024 | 2022-11-16T02:50:17.000Z | null | false | 8e54aa032996e146b47b98d91a8ce414a616b554 | [] | [] | https://huggingface.co/datasets/Tristan/olm-october-2022-tokenized-1024/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 79468727400
num_examples: 12909150
download_size: 21027268683
dataset_size: 79468727400
---
# Dataset Card for "olm-october-2022-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fazzie | null | null | null | false | null | false | Fazzie/Teyvat | 2022-11-16T09:55:55.000Z | null | false | d8bb40ec2efe1622bfdff93b7fb17c9dc75b6660 | [] | [] | https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/README.md | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
illorg | null | null | null | false | null | false | illorg/espn | 2022-11-16T04:59:09.000Z | null | false | 1c510d8fba5836df9983f4600a832f226667892d | [] | [] | https://huggingface.co/datasets/illorg/espn/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 44761
num_examples: 4
download_size: 28603
dataset_size: 44761
---
# Dataset Card for "espn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alanila | null | null | null | false | null | false | alanila/autotrain-data-mm | 2022-11-16T06:27:30.000Z | null | false | 7e5ded70f2d2bb9ce0119a4c11507aad4205b5f6 | [] | [] | https://huggingface.co/datasets/alanila/autotrain-data-mm/resolve/main/README.md | ---
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: mm
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mm.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Email from attorney A Dutkanych regarding executed Settlement Agreement",
"target": "Email from attorney A Dutkanych regarding executed Settlement Agreement"
},
{
"text": "Telephone conference with A Royer regarding additional factual background information relating to O Stapletons Charge of Discrimination allegations",
"target": "Telephone conference with A Royer regarding additional factual background information as to O Stapletons Charge of Discrimination allegations"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation sets. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 88 |
| valid | 22 |
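As a quick check using the numbers from the table above, the train/valid split works out to roughly 80/20:

```python
# Split sizes from the table above.
train, valid = 88, 22
total = train + valid
print(round(100 * train / total), round(100 * valid / total))  # 80 20
```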
|
mike008 | null | null | null | false | null | false | mike008/wedo | 2022-11-16T08:07:12.000Z | null | false | 9bed3be927cdb7ff24e120ba77ddca329fe3f868 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/mike008/wedo/resolve/main/README.md | ---
license: openrail
---
|
BlackKakapo | null | null | null | false | null | false | BlackKakapo/paraphrase-ro | 2022-11-16T08:01:31.000Z | null | false | 002b234cd1d693c25ba6dd2ebbf3072f6db0653c | [] | [
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"task_ids:paraphrase"
] | https://huggingface.co/datasets/BlackKakapo/paraphrase-ro/resolve/main/README.md | ---
license: apache-2.0
multilinguality: monolingual
size_categories: 10K<n<100K
language: ro
task_ids: [paraphrase]
---
# Romanian paraphrase dataset
This dataset was created by me, specifically for paraphrasing.
[t5-small-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro)
[t5-small-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro-v2)
[t5-base-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro)
[t5-base-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro-v2)
Here you can find ~100k paraphrase examples. |
BlackKakapo | null | null | null | false | null | false | BlackKakapo/grammar-ro | 2022-11-16T08:05:43.000Z | null | false | 235703e57fd035ada8ad10560e34e3b6d7807228 | [] | [
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"task_ids:grammar"
] | https://huggingface.co/datasets/BlackKakapo/grammar-ro/resolve/main/README.md | ---
license: apache-2.0
multilinguality: monolingual
size_categories: 10K<n<100K
language: ro
task_ids: [grammar]
---
# Romanian grammar dataset
This dataset was created by me, specifically for grammar correction.
Here you can find:
- ~1600k examples of grammar (TRAIN).
- ~220k examples of grammar (TEST). |
minoassad | null | null | null | false | null | false | minoassad/SDhistory | 2022-11-16T11:22:39.000Z | null | false | 24c8d54d053939109baa89668c6f8a8ea9b0bdc5 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/minoassad/SDhistory/resolve/main/README.md | ---
license: afl-3.0
---
|
Whispering-GPT | null | null | null | false | null | false | Whispering-GPT/whisper-transcripts-linustechtips | 2022-11-16T08:57:43.000Z | null | false | 33aa0ee4153046aa60981e063378f10f3ba8b614 | [] | [] | https://huggingface.co/datasets/Whispering-GPT/whisper-transcripts-linustechtips/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: channel
dtype: string
- name: channel_id
dtype: string
- name: title
dtype: string
- name: categories
sequence: string
- name: tags
sequence: string
- name: description
dtype: string
- name: text
dtype: string
- name: segments
list:
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 108005056
num_examples: 2950
download_size: 62310446
dataset_size: 108005056
---
# Dataset Card for "whisper-transcripts-linustechtips"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel [Linus Tech Tips](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw). The dataset was created with a medium-size Whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset consists of a single train split.
### Data Fields
The dataset is composed of:
- **id**: Id of the youtube video.
- **channel**: Name of the channel.
- **channel\_id**: Id of the youtube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the time and transcription of the video.
- **start**: When the transcription segment starts.
- **end**: When the transcription segment ends.
- **text**: The text of the transcription.
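As a sketch (the segment values below are invented, but the keys match the fields listed above), per-segment durations can be derived from `start` and `end`:

```python
# Illustrative segments in the shape described above; values are invented.
segments = [
    {"start": 0.0, "end": 4.2, "text": "Welcome back to the channel."},
    {"start": 4.2, "end": 9.7, "text": "Today we are testing a new GPU."},
]

durations = [seg["end"] - seg["start"] for seg in segments]
total = sum(durations)
print(round(total, 1))  # 9.7
```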
### Data Splits
- Train split.
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Linus Tech Tips Channel](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw)
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. |
iwaaaaa | null | null | null | false | null | false | iwaaaaa/aleechan | 2022-11-16T08:53:38.000Z | null | false | a5b6dea1da418d7d505d261a5946055ee46d7a74 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/iwaaaaa/aleechan/resolve/main/README.md | ---
license: artistic-2.0
---
|
jpwahle | null | @inproceedings{kovatchev-etal-2018-etpc,
title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
author = "Kovatchev, Venelin and
Mart{\'\i}, M. Ant{\`o}nia and
Salam{\'o}, Maria",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1221",
} | The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. | false | null | false | jpwahle/etpc | 2022-11-16T08:55:07.000Z | null | false | 2361927af37c135f4f40aeb222676722689009e1 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/jpwahle/etpc/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Extended Paraphrase Typology Corpus
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/venelink/ETPC/
- **Repository:**
- **Paper:** [ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation](http://www.lrec-conf.org/proceedings/lrec2018/pdf/661.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.
### Supported Tasks and Leaderboards
- `text-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence1`: The first sentence of the pair.
- `sentence2`: The second sentence of the pair.
- `etpc_label`: Whether the text pair is a paraphrase, either "yes" (1) or "no" (0), according to the ETPC annotation schema.
- `mrpc_label`: Whether the text pair is a paraphrase, either "yes" (1) or "no" (0), according to the MRPC annotation schema.
- `negation`: Whether one sentence is a negation of the other, either "yes" (1) or "no" (0).
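For illustration (the record below is invented; only the field names come from the list above, and `labels_agree` is a hypothetical helper), one can compare the two annotation schemas:

```python
# Hypothetical ETPC-style record; sentence values are invented.
pair = {
    "idx": 0,
    "sentence1": "The deal was approved by the board on Monday.",
    "sentence2": "On Monday, the board approved the deal.",
    "etpc_label": 1,
    "mrpc_label": 1,
    "negation": 0,
}

def labels_agree(record):
    """True when the ETPC and MRPC schemas assign the same paraphrase label."""
    return record["etpc_label"] == record["mrpc_label"]

print(labels_agree(pair))  # True
```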
### Data Splits
train: 5801
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{kovatchev-etal-2018-etpc,
title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
author = "Kovatchev, Venelin and
Mart{\'\i}, M. Ant{\`o}nia and
Salam{\'o}, Maria",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1221",
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. |
thefivespace | null | null | null | false | null | false | thefivespace/dashandataset | 2022-11-16T08:59:20.000Z | null | false | 84b8c52511486ba4fd5eb145ffbe4e693fba552c | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/thefivespace/dashandataset/resolve/main/README.md | ---
license: apache-2.0
---
|
Jzuluaga | null | null | null | false | null | false | Jzuluaga/atcosim_corpus | 2022-11-16T09:15:19.000Z | null | false | f38e83de8a72200c4da0473f6db57b16f8235923 | [] | [] | https://huggingface.co/datasets/Jzuluaga/atcosim_corpus/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 471628915.76
num_examples: 1901
- name: train
num_bytes: 1934757106.88
num_examples: 7638
download_size: 0
dataset_size: 2406386022.6400003
---
# Dataset Card for "atcosim_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
severo | null | null | null | false | null | false | severo/danish-wit | 2022-11-14T11:01:24.000Z | null | false | d4bfcca433547321d83ef9718b645805087bf70d | [] | [
"language:da",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/severo/danish-wit/resolve/main/README.md | ---
pretty_name: Danish WIT
language:
- da
license:
- cc-by-sa-4.0
size_categories:
- 100K<n<1M
source_datasets:
- wikimedia/wit_base
task_categories:
- image-to-text
- zero-shot-image-classification
- feature-extraction
task_ids:
- image-captioning
---
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
a modified version of WIT from which they removed images with empty
"reference descriptions", images where a person's face covers more than 10% of the
image surface, and inappropriate images that are candidates for deletion. This
dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": {
"bytes": b"\xff\xd8\xff\xe0\x00\x10JFIF...",
"path": None
},
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: a `dict` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
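As a quick sanity check, the split sizes in the table sum to the 168,740 samples stated above:

```python
# Split sizes as listed in the table; the total matches the stated
# number of Danish samples in WIT-Base.
splits = {"train": 167_460, "val": 256, "test": 1_024}
assert sum(splits.values()) == 168_740
```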
## Dataset Creation
### Curation Rationale
Extracting the Danish portion of the WIT-Base dataset is quite cumbersome,
especially as the full dataset takes up 333 GB of disk space, so Danish-WIT was
curated purely to make the Danish portion easier to work with.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
|
minoassad | null | null | null | false | null | false | minoassad/abcdc | 2022-11-16T09:28:16.000Z | null | false | eb26a6e109ccbe16dc493559a48d0b5ed4caa6c0 | [] | [
"doi:10.57967/hf/0111",
"license:afl-3.0"
] | https://huggingface.co/datasets/minoassad/abcdc/resolve/main/README.md | ---
license: afl-3.0
---
|
siberspace | null | null | null | false | null | false | siberspace/keke2 | 2022-11-16T09:28:28.000Z | null | false | 5782fe07bd37ec0535ab0ef253a4ed7868a6c05a | [] | [] | https://huggingface.co/datasets/siberspace/keke2/resolve/main/README.md | |
ascento | null | null | null | false | null | false | ascento/dota2 | 2022-11-16T10:42:15.000Z | null | false | cf4f3f82e3c7ab23e28768c8cdd03c761b1d739e | [] | [
"license:unlicense"
] | https://huggingface.co/datasets/ascento/dota2/resolve/main/README.md | ---
license: unlicense
---
|
mboth | null | null | null | false | null | false | mboth/klassifizierung_luftBereitstellenHamburg | 2022-11-16T11:45:28.000Z | null | false | 398471a3781e97d509f0a07b18f0d58a35bac6e7 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_luftBereitstellenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: zweiteGrundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
0: AbluftAllgemein
1: Abluftventilator
2: Außenluftklappe
3: Entrauchung
4: Filter
5: Fortluftklappe
6: GerätAllgemein
7: ZuluftAllgemein
8: Zuluftventilator
splits:
- name: train
num_bytes: 46996.73529411765
num_examples: 163
- name: test
num_bytes: 6054.794117647059
num_examples: 21
- name: valid
num_bytes: 5766.470588235294
num_examples: 20
download_size: 24697
dataset_size: 58818.0
---
# Dataset Card for "klassifizierung_luftBereitstellenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sanchit-gandhi | null | null | null | false | null | false | sanchit-gandhi/librispeech_asr_dummy | 2022-11-16T11:50:08.000Z | null | false | 4787711e7969cc35188348b2062a6bb7dc5d0cfd | [] | [] | https://huggingface.co/datasets/sanchit-gandhi/librispeech_asr_dummy/resolve/main/README.md | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 22907577.0
num_examples: 100
- name: train.clean.360
num_bytes: 22316398.0
num_examples: 100
- name: train.other.500
num_bytes: 16540199.0
num_examples: 100
- name: validation.clean
num_bytes: 9829905.0
num_examples: 100
- name: validation.other
num_bytes: 10863978.0
num_examples: 100
- name: test.clean
num_bytes: 13519963.0
num_examples: 100
- name: test.other
num_bytes: 8360845.0
num_examples: 100
download_size: 99647113
dataset_size: 104338865.0
---
# Dataset Card for "librispeech_asr_dummy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl | null | null | null | false | null | false | loubnabnl/pii_dataset_checks | 2022-11-16T12:27:22.000Z | null | false | 69acf00a54aa0472b03f8b93128effb9775c624c | [] | [] | https://huggingface.co/datasets/loubnabnl/pii_dataset_checks/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: path
dtype: string
- name: annotation_id
dtype: string
- name: pii
dtype: string
- name: pii_modified
dtype: string
- name: id
dtype: int64
- name: secrets
dtype: string
- name: has_secrets
dtype: bool
- name: number_secrets
dtype: int64
- name: new_content
dtype: string
- name: modified
dtype: bool
- name: references
dtype: string
splits:
- name: train
num_bytes: 4424872.8
num_examples: 192
download_size: 0
dataset_size: 4424872.8
---
# Dataset Card for "pii_dataset_checks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kaliansh | null | null | null | false | null | false | kaliansh/BMW | 2022-11-16T15:46:52.000Z | null | false | 3140d83b17f34f313b3d2117b882b969e6115544 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/kaliansh/BMW/resolve/main/README.md | ---
license: unknown
---
|
mboth | null | null | null | false | null | false | mboth/klassifizierung_waermeVerteilenHamburg | 2022-11-16T12:15:24.000Z | null | false | 9076d85d6202fdce37253cf8e2f0dbddf0d79ea8 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_waermeVerteilenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: zweiteGrundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
0: Heizkreis_allgemein
1: Pumpe
2: Raum
3: Ruecklauf
4: Ventil
5: Vorlauf
6: Waermemengenzaehler
7: Warmwasserbereitung
splits:
- name: train
num_bytes: 92449.8009478673
num_examples: 337
- name: test
num_bytes: 11796.265402843603
num_examples: 43
- name: valid
num_bytes: 11521.9336492891
num_examples: 42
download_size: 36989
dataset_size: 115768.00000000001
---
# Dataset Card for "klassifizierung_waermeVerteilenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_waermeErzeugenHamburg | 2022-11-16T12:19:59.000Z | null | false | db77be0f15073e2924837bf6bd8de7df49dee046 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_waermeErzeugenHamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: grundfunktion
dtype: string
- name: text
dtype: string
- name: zweiteGrundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
0: Kessel
splits:
- name: train
num_bytes: 3827.25
num_examples: 12
- name: test
num_bytes: 637.875
num_examples: 2
- name: valid
num_bytes: 637.875
num_examples: 2
download_size: 14614
dataset_size: 5103.0
---
# Dataset Card for "klassifizierung_waermeErzeugenHamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl | null | null | null | false | null | false | loubnabnl/ds_pii_redacted_checks | 2022-11-16T12:39:33.000Z | null | false | c227d3b862ac5397c305896d10190b7dacf4c8d0 | [] | [] | https://huggingface.co/datasets/loubnabnl/ds_pii_redacted_checks/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: path
dtype: string
- name: annotation_id
dtype: string
- name: pii
dtype: string
- name: pii_modified
dtype: string
- name: id
dtype: int64
- name: secrets
dtype: string
- name: has_secrets
dtype: bool
- name: number_secrets
dtype: int64
- name: new_content
dtype: string
- name: modified
dtype: bool
- name: references
dtype: string
splits:
- name: train
num_bytes: 4424798.88
num_examples: 192
download_size: 0
dataset_size: 4424798.88
---
# Dataset Card for "ds_pii_redacted_checks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
loubnabnl | null | null | null | false | null | false | loubnabnl/ds_pii_redacted | 2022-11-16T12:39:49.000Z | null | false | 29ac69de3586bd68b32b459112cfc28877fa2efb | [] | [] | https://huggingface.co/datasets/loubnabnl/ds_pii_redacted/resolve/main/README.md | ---
dataset_info:
features:
- name: language
dtype: string
- name: license
dtype: string
- name: path
dtype: string
- name: annotation_id
dtype: string
- name: pii
dtype: string
- name: pii_modified
dtype: string
- name: id
dtype: int64
- name: secrets
dtype: string
- name: new_content
dtype: string
splits:
- name: train
num_bytes: 3446999
num_examples: 400
download_size: 0
dataset_size: 3446999
---
# Dataset Card for "ds_pii_redacted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taejunkim | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | null | false | taejunkim/djmix | 2022-11-16T13:58:49.000Z | null | false | fae663aa268c82e2147235c5ce482ed86f3cd1d3 | [] | [] | https://huggingface.co/datasets/taejunkim/djmix/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: The DJ Mix Dataset
size_categories: []
source_datasets: []
tags: []
task_categories: []
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
taejunkim | null | null | null | false | null | false | taejunkim/processed_demo | 2022-11-16T14:22:33.000Z | null | false | 1abb5e627925e8a6689c0aa1c44c59fbac7953dd | [] | [] | https://huggingface.co/datasets/taejunkim/processed_demo/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: package_name
dtype: string
- name: review
dtype: string
- name: date
dtype: string
- name: star
dtype: int64
- name: version_id
dtype: int64
splits:
- name: test
num_bytes: 956
num_examples: 5
- name: train
num_bytes: 1508
num_examples: 5
download_size: 7783
dataset_size: 2464
---
# Dataset Card for "processed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juancopi81 | null | null | null | false | null | false | juancopi81/binomial_3blue1brown_test | 2022-11-16T14:40:23.000Z | null | false | 575b4d50337307354318a0d21bbf4a701639d539 | [] | [] | https://huggingface.co/datasets/juancopi81/binomial_3blue1brown_test/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 59462
num_examples: 2
download_size: 44700
dataset_size: 59462
---
# Dataset Card for "binomial_3blue1brown_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna | null | null | null | false | null | false | polinaeterna/test_push_og | 2022-11-16T15:04:14.000Z | null | false | f599c406b0b7a26af81802dfbc9054a04be30c98 | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push_og/resolve/main/README.md | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
num_bytes: 46
num_examples: 3
- name: test
num_bytes: 32
num_examples: 2
download_size: 1674
dataset_size: 78
---
# Dataset Card for "test_push_og"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | null | false | mboth/klassifizierung_waermeVerteilen_koeln_8000_hamburg | 2022-11-16T15:00:11.000Z | null | false | 6a4ebf6285126e347d309c685a0d6be4f782106d | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_waermeVerteilen_koeln_8000_hamburg/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: Datatype
dtype: string
- name: grundfunktion
dtype: string
- name: ZweiteGrundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
0: Heizkreis_allgemein
1: Pumpe
2: Raum
3: Ruecklauf
4: Uebertrager
5: Ventil
6: Vorlauf
7: Waermemengenzaehler
8: Warmwasserbereitung
splits:
- name: train
num_bytes: 574836.8
num_examples: 2520
- name: test
num_bytes: 71854.6
num_examples: 315
- name: valid
num_bytes: 71854.6
num_examples: 315
download_size: 233209
dataset_size: 718546.0
---
# Dataset Card for "klassifizierung_waermeVerteilen_koeln_8000_hamburg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmanK1202 | null | null | null | false | null | false | AmanK1202/LogoGeneration | 2022-11-16T16:25:00.000Z | null | false | 1ca34e4aefebfefc32f658afa3543126f959b464 | [] | [
"license:other"
] | https://huggingface.co/datasets/AmanK1202/LogoGeneration/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068523 | 2022-11-16T16:43:43.000Z | null | false | f1c8c125bcc621b03c73bd5bccdd38579521c627 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/guess"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068523/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068526 | 2022-11-16T16:25:39.000Z | null | false | d42f42526b7f46be81b6e46696be4bf516d13433 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/guess"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068526/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068524 | 2022-11-16T17:45:44.000Z | null | false | 247e3b4ec632602bead7a90a4fd838450c69c780 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/guess"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068524/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068525 | 2022-11-16T16:35:35.000Z | null | false | cf77295d81f17cafdac7d0152765e8b42392e296 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/guess"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068525/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068527 | 2022-11-16T16:31:49.000Z | null | false | df149fbf9bcca94959d9177c4e99526172e530bf | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/guess"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068527/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: futin/guess
dataset_config: en_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
tofighi | null | null | null | false | null | false | tofighi/bitcoin | 2022-11-16T16:40:59.000Z | null | false | 3d7fb7d0c4be6a2f1c2772cb625f9d941273f3a3 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/tofighi/bitcoin/resolve/main/README.md | ---
license: apache-2.0
---
|
hungngocphat01 | null | null | null | false | null | false | hungngocphat01/zalo-ai-train | 2022-11-16T16:53:52.000Z | null | false | b1c6fa2ca278d7b8d33ca47d0f7258f3b27aea55 | [] | [] | https://huggingface.co/datasets/hungngocphat01/zalo-ai-train/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 642303694.4
num_examples: 9220
download_size: 641985253
dataset_size: 642303694.4
---
# Dataset Card for "zalo-ai-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Den4ikAI | null | null | null | false | null | false | Den4ikAI/mailru-QA-old | 2022-11-16T18:01:57.000Z | null | false | 456f0334dd95c31b2b458fff77626e024e87af03 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Den4ikAI/mailru-QA-old/resolve/main/README.md | ---
license: mit
---
|
dlwh | null | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | false | null | false | dlwh/eu_wikipedias | 2022-11-16T18:12:18.000Z | null | false | b2a751e24770039ef372636cd3747c699ff88f5e | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"license:cc-by-sa-3.0",
"license:gfdl",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"source_datasets:original",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv"
] | https://huggingface.co/datasets/dlwh/eu_wikipedias/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
---
# Dataset Card for Wikipedia
This repo is a wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that just concatenates data from the EU languages.
Please refer to it for a complete data card.
The EU languages we include are:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
As with `olm/wikipedia` you will need to install a few dependencies:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
```python
from datasets import load_dataset
load_dataset("dlwh/eu_wikipedias", date="20221101")
```
Please refer to the original olm/wikipedia for a complete data card.
|
ithieund | null | null | null | false | null | false | ithieund/VietNews-Abs-Sum | 2022-11-16T20:26:33.000Z | null | false | 5ea617ef5250ee9d421c177417029f0be841db63 | [] | [] | https://huggingface.co/datasets/ithieund/VietNews-Abs-Sum/resolve/main/README.md | # VietNews-Abs-Sum
A dataset for the Vietnamese abstractive summarization task.
It includes all articles from the Vietnews (VNDS) dataset, which was released by Van-Hau Nguyen et al.
The articles were collected by the authors from the tuoitre.vn, vnexpress.net, and nguoiduatin.vn online newspapers.
# Introduction
This dataset was extracted from the Train/Val/Test split of the Vietnews dataset. All files from the *test_tokenized*, *train_tokenized* and *val_tokenized* directories are fetched and preprocessed with punctuation normalization. The subsets are then stored in the *raw* directory as 3 files: *train.tsv*, *valid.tsv*, and *test.tsv*. These files are considered the original raw dataset, as nothing changes except the punctuation normalization.
As pointed out in *BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese*, there are many duplicated samples within and across the subsets. We therefore run an additional preprocessing step to remove all duplicates. The process consists of the following steps:
- First, remove all duplicates from each subset
- Second, merge all subsets into one set in the following order: test + val + train
- Finally, remove all duplicates from the merged set and then split it back into 3 new subsets
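The steps above can be sketched in Python. This is an illustrative sketch only — the actual preprocessing script is not part of this card, and the row representation (hashable tuples such as `(article, abstract)`) is an assumption:

```python
def deduplicate_splits(test, valid, train):
    """Sketch of the de-duplication process described above.

    Processing the subsets in the order test -> valid -> train while
    sharing one `seen` set is equivalent to merging as test + val + train,
    removing duplicates from the merged set, and splitting back out:
    a sample that appears in several subsets is kept only in the
    earliest one. Rows are assumed to be hashable tuples.
    """
    seen = set()

    def dedup(rows):
        out = []
        for row in rows:
            if row not in seen:      # drops duplicates both inside a
                seen.add(row)        # subset and across subsets
                out.append(row)
        return out

    return dedup(test), dedup(valid), dedup(train)
```

One consequence of the merge order is that a sample shared between train and test survives only in test, so the test set cannot leak into the training data.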
The final subsets are the same as the original subsets, but with all duplicates removed. Each subset now contains the following number of samples:
- train_no_dups.tsv: 99134 samples
- valid_no_dups.tsv: 22184 samples
- test_no_dups.tsv: 22498 samples
In total, we have 99134 + 22184 + 22498 = 143816 samples after filtering.
Note that this result differs from the number of samples reported in the BARTpho paper, but there are no longer any duplicates inside each subset or across subsets.
These filtered subsets are also exported to JSON Lines format to support future training scripts that require this data format.
# Directory structure
- raw: contains 3 raw subset files fetched from Vietnews directories
- train.tsv
- val.tsv
- test.tsv
- processed: contains the duplicate-filtered subsets
- test.tsv
- train.tsv
- valid.tsv
- test.jsonl
- train.jsonl
- valid.jsonl
- [and other variants]
# Credits
- Special thanks to Vietnews (VNDS) authors: https://github.com/ThanhChinhBK/vietnews
|
Artmann | null | null | null | false | null | false | Artmann/coauthor | 2022-11-16T18:45:10.000Z | null | false | f74aeef8979f2227041e35811b1a774270e7b9f6 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Artmann/coauthor/resolve/main/README.md | ---
license: mit
---
|
osanseviero | null | null | null | false | null | false | osanseviero/karpathy-nn | 2022-11-16T18:48:12.000Z | null | false | f85b08fab9e4a7f58abbba8cd240588ca5909961 | [] | [] | https://huggingface.co/datasets/osanseviero/karpathy-nn/resolve/main/README.md | Invalid username or password. |
juancopi81 | null | null | null | false | null | false | juancopi81/testnnk | 2022-11-16T19:33:22.000Z | null | false | 6eca9828d803494f43b9623a6e952c37a595778d | [] | [] | https://huggingface.co/datasets/juancopi81/testnnk/resolve/main/README.md | ---
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 382632
num_examples: 1
download_size: 176707
dataset_size: 382632
---
# Dataset Card for "testnnk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
salmonhumorous | null | null | null | false | null | false | salmonhumorous/logo-blip-caption | 2022-11-16T19:35:54.000Z | null | false | a99195d7d7197eb9547133cea5046fb81b19a4aa | [] | [] | https://huggingface.co/datasets/salmonhumorous/logo-blip-caption/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 24808769.89
num_examples: 1435
download_size: 24242906
dataset_size: 24808769.89
---
# Dataset Card for "logo-blip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Norod78 | null | null | null | false | null | false | Norod78/ChristmasClaymation-blip-captions | 2022-11-16T20:18:18.000Z | null | false | 55de12c96f4bc4cc14351b3660e009c8c5186088 | [] | [
"size_categories:n<1K",
"task_categories:text-to-image",
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/Norod78/ChristmasClaymation-blip-captions/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 128397390.0
num_examples: 401
download_size: 125229613
dataset_size: 128397390.0
pretty_name: 'Christmas claymation style, BLIP captions'
size_categories:
- n<1K
tags: []
task_categories:
- text-to-image
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
---
# Dataset Card for "ChristmasClaymation-blip-captions"
All captions end with the suffix ", Christmas claymation style" |
ithieund | null | null | null | false | null | false | ithieund/viWikiHow-Abs-Sum | 2022-11-16T20:37:52.000Z | null | false | d683ca4cbc5b47b25f244e1463d0e39da1f4e802 | [] | [
"license:mit"
] | https://huggingface.co/datasets/ithieund/viWikiHow-Abs-Sum/resolve/main/README.md | # viWikiHow-Abs-Sum
A dataset for Vietnamese Abstractive Summarization task.
It includes all Vietnamese posts from WikiHow that were released in the WikiLingua dataset.
# Introduction
This dataset was extracted from the train/test split of the WikiLingua dataset. As the target language is Vietnamese, we removed all other files, keeping only train.\*.vi, test.\*.vi, and val.\*.vi for the Vietnamese abstractive summarization task. The raw files are stored in the *raw* directory; we then run a Python script to generate ready-to-use data files in TSV and JSONLINE formats, which are stored in the *processed* directory so they can easily be used by future training scripts.
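The generation step described above can be sketched as follows. This is a hypothetical illustration, not the card's actual script: the `convert` function name and the JSON field names `document`/`summary` are assumptions; only the source/target file naming follows WikiLingua's convention.

```python
# Hypothetical sketch: pair each line of a WikiLingua source file with
# the corresponding line of its target (summary) file, and write the
# pairs out as TSV and JSON Lines.
import csv
import json

def convert(src_path, tgt_path, tsv_path, jsonl_path):
    with open(src_path, encoding="utf-8") as f_src, \
         open(tgt_path, encoding="utf-8") as f_tgt, \
         open(tsv_path, "w", encoding="utf-8", newline="") as f_tsv, \
         open(jsonl_path, "w", encoding="utf-8") as f_jsonl:
        writer = csv.writer(f_tsv, delimiter="\t")
        # Source and target files are line-aligned: line i of the
        # source file is summarized by line i of the target file.
        for doc, summ in zip(f_src, f_tgt):
            doc, summ = doc.strip(), summ.strip()
            writer.writerow([doc, summ])
            f_jsonl.write(json.dumps({"document": doc, "summary": summ},
                                     ensure_ascii=False) + "\n")
```

For example, `convert("raw/val.src.vi", "raw/val.tgt.vi", "processed/valid.tsv", "processed/valid.jsonl")` would produce the validation files listed below.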
# Directory structure
- raw: contains raw text files from WikiLingua
- test.src.vi
- test.tgt.vi
- train.src.vi
- train.tgt.vi
- val.src.vi
- val.tgt.vi
- processed: contains generated TSV and JSONLINE files
- test.tsv
- train.tsv
- valid.tsv
- test.jsonl
- train.jsonl
- valid.jsonl
- [and other variants]
# Credits
- Special thanks to WikiLingua authors: https://github.com/esdurmus/Wikilingua
- Article provided by <a href="https://www.wikihow.com/Main-Page" target="_blank">wikiHow</a>, a wiki that is building the world's largest and highest quality how-to manual. Please edit this article and find author credits at the original wikiHow article on How to Tie a Tie. Content on wikiHow can be shared under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/" target="_blank">Creative Commons License</a>.
|