author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
genesisqu | null | null | null | false | null | false | genesisqu/fake-real-news | 2022-10-17T18:06:58.000Z | null | false | 9cca94a438905a40883b71c3b5f91c9673b6eec9 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/genesisqu/fake-real-news/resolve/main/README.md | ---
license: bsd
---
|
SamHernandez | null | null | null | false | null | false | SamHernandez/computer-style | 2022-10-17T18:17:49.000Z | null | false | 2058f015aaded08f35656df841a841f829fc10d8 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/SamHernandez/computer-style/resolve/main/README.md | ---
license: afl-3.0
---
|
awacke1 | null | null | null | false | null | false | awacke1/MedNorm2SnomedCT2UMLS | 2022-10-29T12:44:37.000Z | null | false | a6b78c659cbcaf6d15d8ba66e90fb5dc8af6d34d | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/MedNorm2SnomedCT2UMLS/resolve/main/README.md | ---
license: mit
---
MedNorm2SnomedCT2UMLS |
ivanfdzm | null | null | null | false | null | false | ivanfdzm/Arq-Style | 2022-10-17T21:40:50.000Z | null | false | 275329715cd6047b261b5c25b4373f8a6d68b659 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ivanfdzm/Arq-Style/resolve/main/README.md | ---
license: afl-3.0
---
|
moisestech | null | null | null | false | null | false | moisestech/beyond-money | 2022-10-18T02:17:59.000Z | null | false | e0124a3be3482b1196e6b716e19b4fb903214da4 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/moisestech/beyond-money/resolve/main/README.md | ---
license: unknown
---
|
BioBlast3r | null | null | null | false | null | false | BioBlast3r/Train-01-Maxx | 2022-10-18T05:13:35.000Z | null | false | 98d16bdab800245e1e81a1bd4d1506c67a702736 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/BioBlast3r/Train-01-Maxx/resolve/main/README.md | ---
license: unknown
---
|
yirmibesogluz | null | null | null | false | null | false | yirmibesogluz/Code-switching-tr-en | 2022-10-18T06:21:25.000Z | null | false | 44ad185079e20dc8f437d7dfd4bdff2168640097 | [] | [
"license:mit"
] | https://huggingface.co/datasets/yirmibesogluz/Code-switching-tr-en/resolve/main/README.md | ---
license: mit
---
|
Chiba1sonny | null | null | null | false | null | false | Chiba1sonny/RMD | 2022-10-18T06:31:40.000Z | null | false | 2651add0c4bc0b204470551eae0a1f531004c25b | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Chiba1sonny/RMD/resolve/main/README.md | ---
license: openrail
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-phpthinh__ex1-all-65db7c-1796062129 | 2022-10-18T07:27:01.000Z | null | false | b8cf42f0d1cf99a04313dd8d7d77bb0fb1d42a19 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/ex1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__ex1-all-65db7c-1796062129/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/ex1
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/ex1
dataset_config: all
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex1
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-phpthinh__ex2-all-93c06b-1796162130 | 2022-10-18T07:26:43.000Z | null | false | 7969c200d7b0eec370bce6870897ef06d678248e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/ex2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__ex2-all-93c06b-1796162130/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/ex2
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/ex2
dataset_config: all
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex2
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
nayan06 | null | null | null | false | 36 | false | nayan06/conversion1.0 | 2022-10-18T10:40:06.000Z | null | false | 8d80667af835aa457c6ae62f3e4331dc3e299461 | [] | [] | https://huggingface.co/datasets/nayan06/conversion1.0/resolve/main/README.md | ---
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
eval_split: test
col_mapping:
text: text
label: target
--- |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-emotion-default-1b690b-1797662163 | 2022-10-18T06:55:22.000Z | null | false | 40e52ccedb5de97ee785848924f08544f3ca0969 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-emotion-default-1b690b-1797662163/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: Emanuel/bertweet-emotion-base
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Emanuel/bertweet-emotion-base
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nayan06](https://huggingface.co/nayan06) for evaluating this model. |
Makokokoko | null | null | null | false | null | false | Makokokoko/aaaaa | 2022-10-18T07:38:02.000Z | null | false | 645a6fcac4e0a587b428e545ccd8d68d71d847b3 | [] | [] | https://huggingface.co/datasets/Makokokoko/aaaaa/resolve/main/README.md | pip install diffusers transformers nvidia-ml-py3 ftfy pytorch pillow
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-phpthinh__ex3-all-630c04-1799362235 | 2022-10-18T09:07:43.000Z | null | false | eee375a1b72660d3bd2c5b468d18615279f6d992 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:phpthinh/ex3"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-phpthinh__ex3-all-630c04-1799362235/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/ex3
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-560m
metrics: []
dataset_name: phpthinh/ex3
dataset_config: all
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex3
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
bdvysg | null | null | null | false | null | false | bdvysg/test | 2022-10-18T08:22:27.000Z | null | false | d686b6b41eaf45992b5f30ef384e42071168f021 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/bdvysg/test/resolve/main/README.md | ---
license: openrail
---
|
GabeHD | null | null | null | false | 14 | false | GabeHD/pokemon-type-captions | 2022-10-23T04:40:59.000Z | null | false | b45637b6ba33e8c7709e20385ac1c8dbc0cdec1f | [] | [] | https://huggingface.co/datasets/GabeHD/pokemon-type-captions/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 19372532.0
num_examples: 898
download_size: 0
dataset_size: 19372532.0
---
# Dataset Card for Pokémon type captions
Contains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex).
Each Pokémon is represented once by the default form from [PokéAPI](https://pokeapi.co/)
Each row contains `image` and `text` keys:
- `image` is a 475x475 PIL jpg of the Pokémon's official artwork.
- `text` is a label describing the Pokémon by its type(s)
## Attributions
_Images and typing information pulled from [PokéAPI](https://pokeapi.co/)_
_Based on the [Lambda Labs Pokémon Blip Captions Dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)_
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-conceptual_captions-unlabeled-ccbde0-1800162251 | 2022-10-18T23:14:21.000Z | null | false | a9ce511fb3bfa4898bd7dcebe6f23e08211243a8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conceptual_captions"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-conceptual_captions-unlabeled-ccbde0-1800162251/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conceptual_captions
eval_info:
task: summarization
model: 0ys/mt5-small-finetuned-amazon-en-es
metrics: ['accuracy']
dataset_name: conceptual_captions
dataset_config: unlabeled
dataset_split: train
col_mapping:
text: image_url
target: caption
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: 0ys/mt5-small-finetuned-amazon-en-es
* Dataset: conceptual_captions
* Config: unlabeled
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@DonaldDaz](https://huggingface.co/DonaldDaz) for evaluating this model. |
thisisHJLee | null | null | null | false | 4 | false | thisisHJLee/ksponspeech | 2022-10-18T09:13:57.000Z | null | false | c74f1a871ce1c3f7f05f84e55adf850f05591311 | [] | [] | https://huggingface.co/datasets/thisisHJLee/ksponspeech/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 5436798811.006
num_examples: 27093
download_size: 4321258116
dataset_size: 5436798811.006
---
# Dataset Card for "ksponspeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
guillaume-chervet | null | null | null | false | null | false | guillaume-chervet/test | 2022-10-18T09:31:46.000Z | null | false | b0a9932fc04d28f16f0af3bc21233ce7e8a164bf | [] | [
"license:mit"
] | https://huggingface.co/datasets/guillaume-chervet/test/resolve/main/README.md | ---
license: mit
---
|
gigant | null | null | null | false | 63 | false | gigant/ted_descriptions | 2022-10-18T11:16:29.000Z | null | false | 3d0d0a2113f3a35a0163f68d96c6307d641f1a5a | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/gigant/ted_descriptions/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: TED descriptions
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-generation
task_ids:
- language-modeling
dataset_info:
features:
- name: url
dtype: string
- name: descr
dtype: string
splits:
- name: train
num_bytes: 2617778
num_examples: 5705
download_size: 1672988
dataset_size: 2617778
---
# Dataset Card for TED descriptions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
elenanereiss | null | @misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
} | A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. | false | 119 | false | elenanereiss/german-ler | 2022-10-26T08:32:17.000Z | dataset-of-legal-documents | false | 9d7a3960c7b4b1f6efb1e97bd4d469a217b46930 | [] | [
"arxiv:2003.13016",
"doi:10.57967/hf/0046",
"annotations_creators:expert-generated",
"language_creators:found",
"language:de",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:ner, named entity recognition, legal ner, legal texts, la... | https://huggingface.co/datasets/elenanereiss/german-ler/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: dataset-of-legal-documents
pretty_name: German Named Entity Recognition in Legal Documents
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- ner, named entity recognition, legal ner, legal texts, label classification
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
---
# Dataset Card for "German LER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elenanereiss/Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition)
- **Paper:** [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf)
- **Point of Contact:** [elena.leitner@dfki.de](elena.leitner@dfki.de)
### Dataset Summary
A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the `BIO` tagging scheme.
The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes (`ner_tags`) and another one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).

For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf).
### Supported Tasks and Leaderboards
- **Tasks:** Named Entity Recognition
- **Leaderboards:**
### Languages
German
## Dataset Structure
### Data Instances
```python
{
'id': '1',
'tokens': ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38],
'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]
}
```
### Data Fields
```python
{
'id': Value(dtype='string', id=None),
'tokens': Sequence(feature=Value(dtype='string', id=None),
length=-1, id=None),
'ner_tags': Sequence(feature=ClassLabel(num_classes=39,
names=['B-AN',
'B-EUN',
'B-GRT',
'B-GS',
'B-INN',
'B-LD',
'B-LDS',
'B-LIT',
'B-MRK',
'B-ORG',
'B-PER',
'B-RR',
'B-RS',
'B-ST',
'B-STR',
'B-UN',
'B-VO',
'B-VS',
'B-VT',
'I-AN',
'I-EUN',
'I-GRT',
'I-GS',
'I-INN',
'I-LD',
'I-LDS',
'I-LIT',
'I-MRK',
'I-ORG',
'I-PER',
'I-RR',
'I-RS',
'I-ST',
'I-STR',
'I-UN',
'I-VO',
'I-VS',
'I-VT',
'O'],
id=None),
length=-1,
id=None),
'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15,
names=['B-LIT',
'B-LOC',
'B-NRM',
'B-ORG',
'B-PER',
'B-REG',
'B-RS',
'I-LIT',
'I-LOC',
'I-NRM',
'I-ORG',
'I-PER',
'I-REG',
'I-RS',
'O'],
id=None),
length=-1,
id=None)
}
```
### Data Splits
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | 53384 | 6666 | 6673 |
## Dataset Creation
### Curation Rationale
Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.
### Source Data
Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
#### Initial Data Collection and Normalization
From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements `Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel`. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed.
The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/).
#### Who are the source language producers?
The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.
### Annotations
#### Annotation process
For more details see [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German).
<!-- #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Personal and Sensitive Information
A fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization.
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Licensing Information
[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
|
Maxstan | null | null | null | false | null | false | Maxstan/srl_for_emotions_russian | 2022-10-18T12:39:32.000Z | null | false | e11292434fb9e789f80041723bdf2e61cfdc7e0b | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/Maxstan/srl_for_emotions_russian/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
SRL annotated corpora for extracting experiencer and cause of emotions |
Sombrax | null | null | null | false | null | false | Sombrax/Cat | 2022-10-18T12:31:59.000Z | null | false | 72be14bbf250a379c8cef9e1f0ea6031b84610f7 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Sombrax/Cat/resolve/main/README.md | ---
license: openrail
---
|
loubnabnl | null | null | null | false | null | false | loubnabnl/names_detection_codebert | 2022-10-25T13:58:12.000Z | null | false | 89a3efaefc486db1423676ae07e82c000ece49ff | [] | [] | https://huggingface.co/datasets/loubnabnl/names_detection_codebert/resolve/main/README.md | ---
dataset_info:
features:
- name: path
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
- name: names
list:
- name: end
dtype: int64
- name: entity_group
dtype: string
- name: score
dtype: float32
- name: start
dtype: int64
- name: word
dtype: string
- name: person_names
sequence: string
splits:
- name: train
num_bytes: 867667
num_examples: 100
download_size: 325084
dataset_size: 867667
---
# Dataset Card for "names_detection_codebert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl | null | null | null | false | null | false | loubnabnl/names_detection_bert_ner | 2022-10-25T13:09:17.000Z | null | false | 07a4ead4cdcdbbfe447d32751550b2c17bedd365 | [] | [] | https://huggingface.co/datasets/loubnabnl/names_detection_bert_ner/resolve/main/README.md | ---
dataset_info:
pinned: True
features:
- name: path
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
- name: names
list:
- name: end
dtype: int64
- name: entity_group
dtype: string
- name: score
dtype: float32
- name: start
dtype: int64
- name: word
dtype: string
- name: person_names
sequence: string
splits:
- name: train
num_bytes: 868158
num_examples: 100
download_size: 325680
dataset_size: 868158
---
# Dataset Card for "names_detection_bert_ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gigant | null | null | null | false | 2 | false | gigant/tib_urls | 2022-10-18T12:38:54.000Z | null | false | 6b3af9306db06190d492fffa7d330cc0b0de205e | [] | [] | https://huggingface.co/datasets/gigant/tib_urls/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: href
dtype: string
- name: description
dtype: 'null'
- name: url_vid
dtype: string
splits:
- name: train
num_bytes: 5590046
num_examples: 22091
download_size: 3220099
dataset_size: 5590046
---
# Dataset Card for "tib_urls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Maxstan | null | null | null | false | null | false | Maxstan/russian_youtube_comments_political_and_nonpolitical | 2022-10-18T12:57:07.000Z | null | false | 1e7a79e5c2bcd57dc6324cb159732771229bc89a | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/Maxstan/russian_youtube_comments_political_and_nonpolitical/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
The data contains comments from political and nonpolitical Russian-speaking YouTube channels.
Date interval: 1 year between April 30, 2020, and April 30, 2021 |
odepraz | null | null | null | false | 2 | false | odepraz/rvl_cdip_1percentofdata | 2022-10-18T13:00:07.000Z | null | false | 6e31833b336cf58776371b31804dc3285108347c | [] | [
"license:unknown"
] | https://huggingface.co/datasets/odepraz/rvl_cdip_1percentofdata/resolve/main/README.md | ---
license: unknown
---
|
ellabettison | null | null | null | false | 64 | false | ellabettison/processed_roberta_dataset_padded_med | 2022-10-21T20:35:02.000Z | null | false | 493f29efea56cdaddb900326431581f52e967a01 | [] | [] | https://huggingface.co/datasets/ellabettison/processed_roberta_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 27003.024
num_examples: 250
- name: train
num_bytes: 26976020.976
num_examples: 249750
download_size: 4381282
dataset_size: 27003024.0
---
# Dataset Card for "processed_roberta_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Kentaline | null | null | null | false | 1 | false | Kentaline/hf-dataset-study | 2022-10-18T14:35:42.000Z | null | false | 3cc76d8c9536209e9772338d9567b8ae2a767d79 | [] | [
"license:other"
] | https://huggingface.co/datasets/Kentaline/hf-dataset-study/resolve/main/README.md | ---
license: other
---
---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: squad-ja
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
train-eval-index:
- col_mapping:
answers:
answer_start: answer_start
text: text
context: context
question: question
config: squad_v2
metrics:
- name: SQuAD v2
type: squad_v2
splits:
eval_split: validation
train_split: train
task: question-answering
task_id: extractive_question_answering
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Google翻訳APIで翻訳した日本語版SQuAD2.0
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Japanese
## Dataset Structure
### Data Instances
```
{
"start": 43,
"end": 88,
"question": "ビヨンセ は いつ から 人気 を 博し 始め ました か ?",
"context": "BeyoncéGiselleKnowles - Carter ( /b i ː ˈ j ɒ nse ɪ / bee - YON - say ) ( 1981 年 9 月 4 日 生まれ ) は 、 アメリカ の シンガー 、 ソング ライター 、 レコード プロデューサー 、 女優 です 。 テキサス 州 ヒューストン で 生まれ育った 彼女 は 、 子供 の 頃 に さまざまな 歌 と 踊り の コンテスト に 出演 し 、 1990 年 代 後半 に R & B ガールグループ Destiny & 39 ; sChild の リード シンガー と して 名声 を 博し ました 。 父親 の マシューノウルズ が 管理 する この グループ は 、 世界 で 最も 売れて いる 少女 グループ の 1 つ に なり ました 。 彼 ら の 休み は ビヨンセ の デビュー アルバム 、 DangerouslyinLove ( 2003 ) の リリース を 見 ました 。 彼女 は 世界 中 で ソロ アーティスト と して 確立 し 、 5 つ の グラミー 賞 を 獲得 し 、 ビル ボード ホット 100 ナンバーワン シングル 「 CrazyinLove 」 と 「 BabyBoy 」 を フィーチャー し ました 。",
"id": "56be85543aeaaa14008c9063"
}
```
### Data Fields
- start
- end
- question
- context
- id
### Data Splits
- train 86820
- valid 5927
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
DILAB-HYU | null | null | null | false | null | false | DILAB-HYU/SimKoR | 2022-10-18T17:27:05.000Z | null | false | 23ca06a50ba8b4860567c1112ba142820d223743 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/DILAB-HYU/SimKoR/resolve/main/README.md | ---
license: cc-by-4.0
---
# SimKoR
We provide korean sentence text similarity pair dataset using sentiment analysis corpus from [bab2min/corpus](https://github.com/bab2min/corpus).
This data crawling korean review from naver shopping website. we reconstruct subset of dataset to make our dataset.
## Dataset description
The original dataset description can be found at the link [[here]](https://github.com/bab2min/corpus/tree/master/sentiment).

In korean Contrastive Learning, There are few suitable validation dataset (only KorNLI). To create contrastive learning validation dataset, we changed original sentiment analysis dataset to sentence text similar dataset. Our simkor dataset was created by grouping pair of sentence. Each score [0,1,2,4,5] means how far the meaning is between sentences.
## Data Distribution
Our dataset class consist of text similarity score [0, 1,2,4,5]. each score consists of data of the same size.
<table>
<tr><th>Score</th><th>train</th><th>valid</th><th>test</th></tr>
<tr><th>5</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>4</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>2</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>1</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>0</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>All</th><th>20,000</th><th>5,000</th><th>5,000</th></tr>
</table>
## Example
```
text1 text2 label
고속충전이 안됨ㅠㅠ 집에매연냄새없앨려했는데 그냥창문여는게더 공기가좋네요 5
적당히 맵고 괜찮네요 어제 시킨게 벌써 왔어요 ㅎㅎ 배송빠르고 품질양호합니다 4
다 괜찮은데 배송이 10일이나 걸린게 많이 아쉽네요. 선반 설치하고 나니 주방 베란다 완전 다시 태어났어요~ 2
가격 싸지만 쿠션이 약해 무릎 아파요~ 반품하려구요~ 튼튼하고 빨래도 많이 걸 수 있고 잘쓰고 있어요 1
각인이 찌그저져있고 엉성합니다. 처음 해보는 방탈출이었는데 너무 재미있었어요. 0
```
## Contributors
The main contributors of the work are :
- [Jaemin Kim](https://github.com/kimfunn)\*
- [Yohan Na](https://github.com/nayohan)\*
- [Kangmin Kim](https://github.com/Gangsss)
- [Sangrak Lee](https://github.com/PangRAK)
\*: Equal Contribution
Hanyang University Data Intelligence Lab[(DILAB)](http://dilab.hanyang.ac.kr/) providing support ❤️
## Github
- **Repository :** [SimKoR](https://github.com/nayohan/SimKoR)
## License
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>. |
mrm8488 | null | null | null | false | 13 | false | mrm8488/stackoverflow-ner | 2022-10-18T14:55:17.000Z | null | false | e44a22761da9acdd42b4181f33aac27a95436824 | [] | [] | https://huggingface.co/datasets/mrm8488/stackoverflow-ner/resolve/main/README.md | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
splits:
- name: test
num_bytes: 680079
num_examples: 3108
- name: train
num_bytes: 2034117
num_examples: 9263
- name: validation
num_bytes: 640935
num_examples: 2936
download_size: 692070
dataset_size: 3355131
---
# Dataset Card for "stackoverflow-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
machelreid | null | null | null | false | 9,939 | false | machelreid/m2d2 | 2022-10-25T12:57:24.000Z | null | false | 6ae613ba744ee56f645aee9c577d04f7e3d48b30 | [] | [
"arxiv:2210.07370",
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/machelreid/m2d2/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
# M2D2: A Massively Multi-domain Language Modeling Dataset
*From the paper "[M2D2: A Massively Multi-domain Language Modeling Dataset](https://arxiv.org/abs/2210.07370)", (Reid et al., EMNLP 2022)*
Load the dataset as follows:
```python
import datasets
dataset = datasets.load_dataset("machelreid/m2d2", "cs.CL") # replace cs.CL with the domain of your choice
print(dataset['train'][0]['text'])
```
## Domains
- Culture_and_the_arts
- Culture_and_the_arts__Culture_and_Humanities
- Culture_and_the_arts__Games_and_Toys
- Culture_and_the_arts__Mass_media
- Culture_and_the_arts__Performing_arts
- Culture_and_the_arts__Sports_and_Recreation
- Culture_and_the_arts__The_arts_and_Entertainment
- Culture_and_the_arts__Visual_arts
- General_referece
- General_referece__Further_research_tools_and_topics
- General_referece__Reference_works
- Health_and_fitness
- Health_and_fitness__Exercise
- Health_and_fitness__Health_science
- Health_and_fitness__Human_medicine
- Health_and_fitness__Nutrition
- Health_and_fitness__Public_health
- Health_and_fitness__Self_care
- History_and_events
- History_and_events__By_continent
- History_and_events__By_period
- History_and_events__By_region
- Human_activites
- Human_activites__Human_activities
- Human_activites__Impact_of_human_activity
- Mathematics_and_logic
- Mathematics_and_logic__Fields_of_mathematics
- Mathematics_and_logic__Logic
- Mathematics_and_logic__Mathematics
- Natural_and_physical_sciences
- Natural_and_physical_sciences__Biology
- Natural_and_physical_sciences__Earth_sciences
- Natural_and_physical_sciences__Nature
- Natural_and_physical_sciences__Physical_sciences
- Philosophy
- Philosophy_and_thinking
- Philosophy_and_thinking__Philosophy
- Philosophy_and_thinking__Thinking
- Religion_and_belief_systems
- Religion_and_belief_systems__Allah
- Religion_and_belief_systems__Belief_systems
- Religion_and_belief_systems__Major_beliefs_of_the_world
- Society_and_social_sciences
- Society_and_social_sciences__Social_sciences
- Society_and_social_sciences__Society
- Technology_and_applied_sciences
- Technology_and_applied_sciences__Agriculture
- Technology_and_applied_sciences__Computing
- Technology_and_applied_sciences__Engineering
- Technology_and_applied_sciences__Transport
- alg-geom
- ao-sci
- astro-ph
- astro-ph.CO
- astro-ph.EP
- astro-ph.GA
- astro-ph.HE
- astro-ph.IM
- astro-ph.SR
- astro-ph_l1
- atom-ph
- bayes-an
- chao-dyn
- chem-ph
- cmp-lg
- comp-gas
- cond-mat
- cond-mat.dis-nn
- cond-mat.mes-hall
- cond-mat.mtrl-sci
- cond-mat.other
- cond-mat.quant-gas
- cond-mat.soft
- cond-mat.stat-mech
- cond-mat.str-el
- cond-mat.supr-con
- cond-mat_l1
- cs.AI
- cs.AR
- cs.CC
- cs.CE
- cs.CG
- cs.CL
- cs.CR
- cs.CV
- cs.CY
- cs.DB
- cs.DC
- cs.DL
- cs.DM
- cs.DS
- cs.ET
- cs.FL
- cs.GL
- cs.GR
- cs.GT
- cs.HC
- cs.IR
- cs.IT
- cs.LG
- cs.LO
- cs.MA
- cs.MM
- cs.MS
- cs.NA
- cs.NE
- cs.NI
- cs.OH
- cs.OS
- cs.PF
- cs.PL
- cs.RO
- cs.SC
- cs.SD
- cs.SE
- cs.SI
- cs.SY
- cs_l1
- dg-ga
- econ.EM
- econ.GN
- econ.TH
- econ_l1
- eess.AS
- eess.IV
- eess.SP
- eess.SY
- eess_l1
- eval_sets
- funct-an
- gr-qc
- hep-ex
- hep-lat
- hep-ph
- hep-th
- math-ph
- math.AC
- math.AG
- math.AP
- math.AT
- math.CA
- math.CO
- math.CT
- math.CV
- math.DG
- math.DS
- math.FA
- math.GM
- math.GN
- math.GR
- math.GT
- math.HO
- math.IT
- math.KT
- math.LO
- math.MG
- math.MP
- math.NA
- math.NT
- math.OA
- math.OC
- math.PR
- math.QA
- math.RA
- math.RT
- math.SG
- math.SP
- math.ST
- math_l1
- mtrl-th
- nlin.AO
- nlin.CD
- nlin.CG
- nlin.PS
- nlin.SI
- nlin_l1
- nucl-ex
- nucl-th
- patt-sol
- physics.acc-ph
- physics.ao-ph
- physics.app-ph
- physics.atm-clus
- physics.atom-ph
- physics.bio-ph
- physics.chem-ph
- physics.class-ph
- physics.comp-ph
- physics.data-an
- physics.ed-ph
- physics.flu-dyn
- physics.gen-ph
- physics.geo-ph
- physics.hist-ph
- physics.ins-det
- physics.med-ph
- physics.optics
- physics.plasm-ph
- physics.pop-ph
- physics.soc-ph
- physics.space-ph
- physics_l1
- plasm-ph
- q-alg
- q-bio
- q-bio.BM
- q-bio.CB
- q-bio.GN
- q-bio.MN
- q-bio.NC
- q-bio.OT
- q-bio.PE
- q-bio.QM
- q-bio.SC
- q-bio.TO
- q-bio_l1
- q-fin.CP
- q-fin.EC
- q-fin.GN
- q-fin.MF
- q-fin.PM
- q-fin.PR
- q-fin.RM
- q-fin.ST
- q-fin.TR
- q-fin_l1
- quant-ph
- solv-int
- stat.AP
- stat.CO
- stat.ME
- stat.ML
- stat.OT
- stat.TH
- stat_l1
- supr-con
supr-con
## Citation
Please cite this work if you found this data useful.
```bib
@article{reid2022m2d2,
title = {M2D2: A Massively Multi-domain Language Modeling Dataset},
author = {Machel Reid and Victor Zhong and Suchin Gururangan and Luke Zettlemoyer},
year = {2022},
journal = {arXiv preprint arXiv: Arxiv-2210.07370}
}
``` |
stas | null | @InProceedings{huggingface:dataset,
title = {Multimodal synthetic dataset for testing},
author={HuggingFace, Inc.},
year={2022}
} | This dataset is designed to be used in testing. It's derived from cm4-10k dataset | false | 2 | false | stas/cm4-synthetic-testing | 2022-10-18T16:20:31.000Z | null | false | 5bc518f5c3350f2c92f405e8223c982c8b9dc9f0 | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/stas/cm4-synthetic-testing/resolve/main/README.md | ---
license: bigscience-openrail-m
---
This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset.
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across text entries.
The `repeat` ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation.
The default split is `100.unique`
The full process of this dataset creation, including which records were used to build it, is documented inside [cm4-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/blob/main/cm4-synthetic-testing.py)
|
stas | null | @InProceedings{huggingface:dataset,
title = {Multimodal synthetic dataset for testing / general PMD},
author={HuggingFace, Inc.},
year={2022}
} | This dataset is designed to be used in testing. It's derived from general-pmd-10k dataset | false | 3 | false | stas/general-pmd-synthetic-testing | 2022-10-18T16:21:21.000Z | null | false | 3b1395c1f2fa1e3432227828e5e917aefe3bade8 | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/stas/general-pmd-synthetic-testing/resolve/main/README.md | ---
license: bigscience-openrail-m
---
This dataset is designed to be used in testing. It's derived from general-pmd/localized_narratives__ADE20k dataset
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across `text` entries.
The `repeat` ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation.
The default split is `100.unique`
The full process of this dataset creation, including which records were used to build it, is documented inside [general-pmd-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/general-pmd-synthetic-testing/blob/main/general-pmd-synthetic-testing.py)
|
chloeliu | null | null | null | false | 1 | false | chloeliu/try | 2022-10-18T16:22:41.000Z | null | false | ef9fcb2ac1c9b7aa009131db6ce844f05bf9273f | [] | [
"license:bsd"
] | https://huggingface.co/datasets/chloeliu/try/resolve/main/README.md | ---
license: bsd
---
|
awacke1 | null | null | null | false | 268 | false | awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv | 2022-10-29T12:42:02.000Z | null | false | 7e1f136ac970901e3c0f3e7d3c1a767c15f20a31 | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv/resolve/main/README.md | ---
license: mit
---
SNOMED-CT-Code-Value-Semantic-Set.csv |
awacke1 | null | null | null | false | 220 | false | awacke1/eCQM-Code-Value-Semantic-Set.csv | 2022-10-29T12:40:54.000Z | null | false | 64caf7a2bb26ccdd78697ba041707e394f07ff1b | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/eCQM-Code-Value-Semantic-Set.csv/resolve/main/README.md | ---
license: mit
---
eCQM-Code-Value-Semantic-Set.csv |
awacke1 | null | null | null | false | null | false | awacke1/LOINC-Code-Value-Semantic-Set.csv | 2022-10-18T18:57:05.000Z | null | false | 40b8fae1f422ec5e32cc6fad58130638209791cb | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/LOINC-Code-Value-Semantic-Set.csv/resolve/main/README.md | ---
license: mit
---
|
awacke1 | null | null | null | false | null | false | awacke1/LOINC-CodeSet-Value-Description-Semantic-Set.csv | 2022-10-18T19:02:20.000Z | null | false | 56c2617cb61295fc224c6f00b716d272db14a338 | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/LOINC-CodeSet-Value-Description-Semantic-Set.csv/resolve/main/README.md | ---
license: mit
---
|
GrainsPolito | null | null | null | false | null | false | GrainsPolito/BBBicycles | 2022-10-20T11:14:59.000Z | null | false | 2e48dde411fd4172fa5196bfe0f6aa1ad75f20d5 | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/GrainsPolito/BBBicycles/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
# Dataset Card for BBBicycles
## Dataset Summary
Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of **damaged object re-identification**, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview [here](https://huggingface.co/spaces/GrainsPolito/BBBicyclesPreview).
## Dataset Structure
The final dataset contains:
- Total of 39,200 image
- 2,800 unique IDs
- 20 models
- 140 IDs for each model
<table border-collapse="collapse">
<tr>
<td><b style="font-size:25px">Information for each ID:</b></td>
<td><b style="font-size:25px">Information for each render:</b></td>
</tr>
<tr>
<td>
<ul>
<li>Model</li>
<li>Type</li>
<li>Texture type</li>
<li>Stickers</li>
</ul>
</td>
<td>
<ul>
<li>Background</li>
<li>Viewing Side</li>
<li>Focal Length</li>
<li>Presence of dirt</li>
</ul>
</td>
</tr>
</table>
### Citation Information
```
@inproceedings{bbb_2022,
title={Bent & Broken Bicycles: Leveraging synthetic data for damaged object re-identification},
author={Luca Piano, Filippo Gabriele Pratticò, Alessandro Sebastian Russo, Lorenzo Lanari, Lia Morra, Fabrizio Lamberti},
booktitle={2022 IEEE Winter Conference on Applications of Computer Vision (WACV)},
year={2022},
organization={IEEE}
}
```
### Credits
The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni. |
awacke1 | null | null | null | false | 225 | false | awacke1/LOINC-CodeSet-Value-Description.csv | 2022-10-29T12:43:25.000Z | null | false | f53f236e7cef0060169e534d6125f3c7d949a0f2 | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/LOINC-CodeSet-Value-Description.csv/resolve/main/README.md | ---
license: mit
---
LOINC-CodeSet-Value-Description.csv |
ashraq | null | null | null | false | 16 | false | ashraq/ott-qa-20k | 2022-10-21T09:06:25.000Z | null | false | b3c1ead7e05c84f8605ed4cae91199639940046a | [] | [] | https://huggingface.co/datasets/ashraq/ott-qa-20k/resolve/main/README.md | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: header
sequence: string
- name: data
sequence:
sequence: string
- name: section_title
dtype: string
- name: section_text
dtype: string
- name: uid
dtype: string
- name: intro
dtype: string
splits:
- name: train
num_bytes: 41038376
num_examples: 20000
download_size: 23329221
dataset_size: 41038376
---
# Dataset Card for "ott-qa-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://github.com/wenhuchen/OTT-QA) |
Yeezgoy | null | null | null | false | null | false | Yeezgoy/dataset-depeche-mode | 2022-10-18T19:33:28.000Z | null | false | 62fb164338c7c00e759cf0dc38e284e529ebfb5e | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Yeezgoy/dataset-depeche-mode/resolve/main/README.md | ---
license: unknown
---
|
spacemanidol | null | null | false | 3 | false | spacemanidol/orcas-queries | 2022-10-18T19:44:24.000Z | null | false | 9b7912f8ecdc299d4262565c1b08e9da0c9e5a45 | [] | [
"license:mit"
] | https://huggingface.co/datasets/spacemanidol/orcas-queries/resolve/main/README.md | ---
license: mit
---
| |
nitrosocke | null | null | null | false | 60 | false | nitrosocke/arcane-diffusion-dataset | 2022-10-18T20:58:23.000Z | null | false | 4027643008b46f5ef6e4ed9529a236d9d17a9777 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/nitrosocke/arcane-diffusion-dataset/resolve/main/README.md | ---
license: creativeml-openrail-m
---
# Arcane Diffusion Dataset
Dataset containing the 75 images used to train the [Arcane Diffusion](https://huggingface.co/nitrosocke/Arcane-Diffusion) model.
Settings for training:
```class prompt: illustration style
instance prompt: illustration arcane style
learning rate: 5e-6
lr scheduler: constant
num class images: 1000
max train steps: 5000
``` |
che111 | null | null | null | false | 18 | false | che111/wildfiire | 2022-10-18T23:08:43.000Z | null | false | 4e28547a299edcc09b21781bf854a5c10f155242 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/che111/wildfiire/resolve/main/README.md | ---
license: apache-2.0
---
|
zhang3gang | null | null | null | false | null | false | zhang3gang/zhang3gang | 2022-10-18T21:52:21.000Z | null | false | f389cdfbd8dd5bde6122f9d88c9312adccbb9eb0 | [] | [] | https://huggingface.co/datasets/zhang3gang/zhang3gang/resolve/main/README.md | |
eytanc | null | null | null | false | null | false | eytanc/FestAbilityTranscripts | 2022-10-19T06:25:24.000Z | null | false | 4ae617cd840824150cedbf4b991b3dd76fc13a89 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/eytanc/FestAbilityTranscripts/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
joey234 | null | null | null | false | null | false | joey234/neg-136 | 2022-10-18T23:30:32.000Z | null | false | 65312b1ae759cb963e15e67d641b76c975f2da5b | [] | [] | https://huggingface.co/datasets/joey234/neg-136/resolve/main/README.md | This dataset contains two subsets: NEG-136-SIMP and NEG-136-NAT. NEG-136-SIMP items come from Fischler et al. (1983). NEG-136-NAT items come from Nieuwland & Kuperberg (2008).
The `NEG-136-SIMP.tsv` and `NEG-136-NAT.tsv` files contain for each item the affirmative and negative version of the context (context_aff, context_neg), and completions that are true with the affirmative context (target_aff) and with the negative context (target_neg).
* For NEG-136-SIMP, determiners (*a*/*an*) are left ambiguous, and need to be selected based on the completion noun (this is done already in `proc_datasets.py`).
**References**:
* Ira Fischler, Paul A Bloom, Donald G Childers, Salim E Roucos, and Nathan W Perry Jr. 1983. *Brain potentials related to stages of sentence verification.*
* Mante S Nieuwland and Gina R Kuperberg. 2008. *When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation.*
|
Tomsonvisual | null | null | null | false | null | false | Tomsonvisual/prueba | 2022-10-19T03:14:52.000Z | null | false | 7ab1aa86a74d3d9632d9fcaa21c88a54f224f2af | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Tomsonvisual/prueba/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-2bc9e0-1812262541 | 2022-10-19T07:36:46.000Z | null | false | d1d88d27e2e912c28703113bc3ddcdc86211e8bc | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-2bc9e0-1812262541/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: google/pegasus-cnn_dailymail
metrics: ['bleu']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-cnn_dailymail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@DongfuTingle](https://huggingface.co/DongfuTingle) for evaluating this model. |
tglcourse | null | null | null | false | 38 | false | tglcourse/CelebA-faces-cropped-128 | 2022-10-19T10:36:16.000Z | null | false | 70eadbf169fd5ab7249fc8fcebced984b1b0f1de | [] | [] | https://huggingface.co/datasets/tglcourse/CelebA-faces-cropped-128/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: test
num_bytes: 274664364.23
num_examples: 10130
- name: train
num_bytes: 5216078696.499
num_examples: 192469
download_size: 0
dataset_size: 5490743060.729
---
# Dataset Card for "CelebA-faces-cropped-128"
Just a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: https://colab.research.google.com/drive/1-P5mKb5VEQrzCmpx5QWomlq0-WNXaSxn?usp=sharing
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nafi-zaman | null | null | null | false | 10 | false | nafi-zaman/banking_intent_utterance_dataset | 2022-10-19T06:43:46.000Z | null | false | ee50ee291793fbb7a140c5194488419286e5a05c | [] | [] | https://huggingface.co/datasets/nafi-zaman/banking_intent_utterance_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: intent
dtype: string
- name: utterance
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4860
num_examples: 100
download_size: 2908
dataset_size: 4860
---
# Dataset Card for "banking_intent_utterance_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
readerbench | null | null | null | false | 3 | false | readerbench/AlephNews | 2022-10-19T06:51:00.000Z | null | false | e5667df9f571ec439dad39a04299daaf0c0315d1 | [] | [] | https://huggingface.co/datasets/readerbench/AlephNews/resolve/main/README.md | # DATASET: AlephNews
|
Adapting | null | null | null | false | 82 | false | Adapting/abstract-keyphrases | 2022-11-09T16:33:20.000Z | null | false | 409409dad3144dc67b45af744339d5c7759fa82f | [] | [
"license:mit"
] | https://huggingface.co/datasets/Adapting/abstract-keyphrases/resolve/main/README.md | ---
license: mit
dataset_info:
features:
- name: Abstract
dtype: string
- name: Keywords
dtype: string
splits:
- name: test
num_bytes: 12878.0
num_examples: 10
- name: train
num_bytes: 64390.0
num_examples: 50
- name: validation
num_bytes: 12878.0
num_examples: 10
download_size: 78983
dataset_size: 90146.0
---
preprocessing: https://colab.research.google.com/drive/1dbiApU33FBwAfxwlGBK00qAkbUsS9iae?usp=sharing |
kunwarsaaim | null | null | null | false | 4 | false | kunwarsaaim/AntiBiasDataset | 2022-10-19T07:23:04.000Z | null | false | 245f937999a7ee923ac968b4bc5c2de896e1130e | [] | [
"license:mit"
] | https://huggingface.co/datasets/kunwarsaaim/AntiBiasDataset/resolve/main/README.md | ---
license: mit
---
# Dataset from the paper [Debiasing Pre-Trained Language Models via Efficient Fine-Tuning](https://aclanthology.org/2022.ltedi-1.8/)
------------------------
The dataset is formed by combining two different datasets: [WinoBias](https://github.com/uclanlp/corefBias) and [CrowS-Pairs](https://github.com/nyu-mll/crows-pairs) |
truongpdd | null | null | null | false | 33 | false | truongpdd/viwiki-dummy | 2022-10-19T07:29:55.000Z | null | false | 091c52677a23ebde4b7c295600a11fdded256175 | [] | [] | https://huggingface.co/datasets/truongpdd/viwiki-dummy/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 507670455
num_examples: 491
download_size: 246069772
dataset_size: 507670455
---
# Dataset Card for "viwiki-dummy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
luuuuuuukee | null | null | null | false | null | false | luuuuuuukee/cameronmurray | 2022-10-19T08:27:11.000Z | null | false | 8a87f63fd78e36dfba842e428d5a123f688a5e1c | [] | [] | https://huggingface.co/datasets/luuuuuuukee/cameronmurray/resolve/main/README.md | |
odepraz | null | null | null | false | 6 | false | odepraz/rvl_cdip_5percentofdata | 2022-10-19T10:06:57.000Z | null | false | ea2ca56e8929ec32a2890685212ccd9bbe7c1f79 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/odepraz/rvl_cdip_5percentofdata/resolve/main/README.md | ---
license: unknown
---
|
davanstrien | null | null | null | false | null | false | davanstrien/loc_maps | 2022-10-21T09:43:05.000Z | null | true | 0fce28355852f4d0305b9e09e3df3351543c4a2f | [] | [] | https://huggingface.co/datasets/davanstrien/loc_maps/resolve/main/README.md | |
Menahem | null | null | null | false | 1 | false | Menahem/sv_corpora_parliament_processed | 2022-10-19T11:15:12.000Z | null | false | b7509d9b067c2fb51b99fd29702edb290c391c2a | [] | [] | https://huggingface.co/datasets/Menahem/sv_corpora_parliament_processed/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 292351437
num_examples: 1892723
download_size: 158940469
dataset_size: 292351437
---
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lemonard0 | null | null | null | false | null | false | Lemonard0/IgnatDynData | 2022-10-19T11:41:34.000Z | null | false | 69a7740e482b02d8ed1daaa472a57a25b65f9ba2 | [] | [
"license:other"
] | https://huggingface.co/datasets/Lemonard0/IgnatDynData/resolve/main/README.md | ---
license: other
---
|
giulio98 | null | null | null | false | 500 | false | giulio98/xlcost-single-prompt | 2022-11-02T19:42:44.000Z | null | false | a2afee3eefe8454d5ca26fd31397766b8e2ceebf | [] | [
"arxiv:2206.08474",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:code",
"license:cc-by-sa-4.0",
"multilinguality:multilingual",
"size_categories:unknown",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/giulio98/xlcost-single-prompt/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: xlcost-single-prompt
---
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at program level for **2** programming languages: `Python, C++`. This dataset is based on [codeparrot/xlcost-text-to-code](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) with the following improvements:
* NEWLINE, INDENT and DEDENT were replaced with the corresponding ASCII codes.
* the code text has been reformatted using autopep8 for Python and clang-format for cpp.
* new columns have been introduced to allow evaluation using pass@k metric.
* programs containing more than one function call in the driver code were removed
## Languages
The dataset contains text in English and its corresponding code translation. The text contains a set of concatenated code comments that allow to synthesize the program.
## Dataset Structure
To load the dataset you need to specify the language(Python or C++).
```python
from datasets import load_dataset
load_dataset("giulio98/xlcost-single-prompt", "Python")
DatasetDict({
train: Dataset({
features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
num_rows: 8306
})
test: Dataset({
features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
num_rows: 812
})
validation: Dataset({
features: ['text', 'context', 'code', 'test', 'output', 'fn_call'],
num_rows: 427
})
})
```
## Data Fields
* text: natural language description.
* context: import libraries/global variables.
* code: code at program level.
* test: test function call.
* output: expected output of the function call.
* fn_call: name of the function to call.
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
``` |
SENRAMEN77 | null | null | null | false | 1 | false | SENRAMEN77/IMAGES | 2022-10-19T12:08:53.000Z | null | false | 0d02401b102b14698cb8e6d33cfdb2f7cedaf770 | [] | [
"license:ecl-2.0"
] | https://huggingface.co/datasets/SENRAMEN77/IMAGES/resolve/main/README.md | ---
license: ecl-2.0
---
|
tglcourse | null | null | null | false | 5 | false | tglcourse/lsun_church_train | 2022-10-19T12:20:45.000Z | null | false | bce8c61d61fe7ced008dc92b345451f380e12414 | [] | [] | https://huggingface.co/datasets/tglcourse/lsun_church_train/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
10: a
11: b
12: c
13: d
14: e
15: f
splits:
- name: test
num_bytes: -5033726665.536212
num_examples: 6312
- name: train
num_bytes: -94551870824.9868
num_examples: 119915
download_size: 2512548233
dataset_size: -99585597490.52301
---
# Dataset Card for "lsun_church_train"
Uploading lsun church train dataset for convenience
I've split this into 119915 train and 6312 test but if you want the original test set see https://github.com/fyu/lsun
Notebook that I used to download then upload this dataset: https://colab.research.google.com/drive/1_f-D2ENgmELNSB51L1igcnLx63PkveY2?usp=sharing
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arize-ai | null | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists on product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed by reviews written in english. However, the production set has
some reviews written in spanish. At Arize, we work to surface this issue and
help you solve it. | false | 3 | false | arize-ai/beer_reviews_label_drift_neg | 2022-10-19T13:20:26.000Z | null | false | 0223753056eb23cae494d9b1729c7f74af906ce0 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/arize-ai/beer_reviews_label_drift_neg/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in english.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
dominguesm | null | null | null | false | 10 | false | dominguesm/wikipedia-ptbr-20220920 | 2022-10-19T19:09:08.000Z | null | false | ef2e809ffd7262a698811fc0e4cafe940b4bc3ca | [] | [
"source_datasets:extended|wikipedia",
"language:pt"
] | https://huggingface.co/datasets/dominguesm/wikipedia-ptbr-20220920/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2603410658
num_examples: 1095165
download_size: 1482052216
dataset_size: 2603410658
source_datasets:
- extended|wikipedia
pretty_name: wikipedia-ptbr-pre-processed
language:
- pt
---
# Dataset Card for "wikipedia-ptbr-20220920"
Pre-processed files of the Portuguese language. |
AIML-TUDA | null | null | null | false | 2 | false | AIML-TUDA/i2g | 2022-11-02T15:09:16.000Z | null | false | 10fd963f406d63fc8657ecd25fd1bf54869cffb4 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/AIML-TUDA/i2g/resolve/main/README.md | ---
license: gpl-3.0
---
# Inaproppriate Image Prompts (I2G)
The I2G benchmark contains real user prompts for generative text2image prompts that are unproportionately likely to produce <i>inappropriate</i> images.
This benchmark is not specific to any approach or model, but was designed to evaluate mitigating measures against <i>inappropriate</i> in
## Definition of <i>inappropriate</i> content
What is considered inappropriate imagery may differ based on context, setting, cultural and social predisposition as well as individual factors and is overall highly subjective. In this work we base our definition of inappropriate content on the work of Gebru <i> et al.</i>:
> [data that] if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
>
> --<cite>Gebru, Timnit, et al. "Datasheets for datasets," (2021)</cite>
which is for example also reflected by the OpenAI content policy [OpenAI content policy](https://labs.openai.com/policies/content-policy) that also applies to the use of DALL-E. Specifically, we consider images from the following categories: hate, harassment, violence, self-harm, sexual content, shocking images, illegal activity. Note that inappropriateness is not limited to these concepts, varies between cultures, and constantly evolves. Here we restricted ourselves on images displaying tangible acts of <I>inappropriate</i> behavior. Consequently, our test bed may contain prompts describing e.g.~geopolitical events or hateful symbols.
## Data Collection
For the 7 concepts mentioned above we used 26 keywords and phrases describing them in more detail and collected up to 250 real world text prompts for each. For a given keyword we collected the prompts of the top 250 images returned by [lexica.art](https://www.lexica.art). Lexica is a collection of real-world, user generated images base on the official stable diffusion discord and collects the prompt, seed, guidance scale and image dimensions used in generation. Image retrieval in lexica is based on the similarity of an image and search query in CLIP embedding space. Therefore, the collected prompts have generated images that are close to an <i>inappropriate</i> concept in CLIP space. Please not that we identify duplicate prompts based on their unique identifier in huggingface. Accordingly, the I2G benchmark may contain entries with the same text prompt but different seeds and generation parameters.
## Estimation of generated, <i>inappropriate</i> content
We provide an estimation of the percentage of <i>inappropriate</i> content based on images generated using [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4). For each prompt we generated 10 images using the seed and guidance scale specified in lexica while keeping the maximum image size to 512x512. Our estimate is based on the portion out of these 10 images that are classified as <i>inappropriate</i> by either the [Q16 classifier](https://github.com/ml-research/Q16) or [NudeNet Detector](https://github.com/notAI-tech/NudeNet).
## Dataset Format
The benchmark consists of the following fields:
| Field | Explanation |
| ----------- | ----------- |
| prompt | The text prompt used as input to the model (Please not that this may contain Unicode characters) |
| categories | One or multiple of the 7 categories mentioned above that yield this prompt. |
| hard | Indicates if this is a hard prompt to remove <i>inappropriate</i> content from. Prompts are labled as hard if at least half of their generated images are <i>inappropriate</i> |
| inappropriate_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 and NudeNet out of 10 generated images using Stable Diffusion |
| nudity_percentage | Percentage of images as depicting explicit nudity as per the NudeNet out of 10 generated images using Stable Diffusion |
| q16_percentage | Percentage of <i>inappropriate</i> images as classified by Q16 out of 10 generated images using Stable Diffusion |
| sd_safety_percentage | Percentage of <i>inappropriate</i> images as classified by the SD NSWF safety checker out of 10 generated images using Stable Diffusion |
| prompt_toxicity | The toxicity score of the text prompt according to the [PerspectiveAPI](https://github.com/conversationai/perspectiveapi) |
| lexica_url | URL to the original prompt and the respective images in lexica for reference |
| sd_seed | Stable diffusion seed used in our image generation |
| sd_guidance_scale | Stable diffusion guidance scale used in our image generation |
| sd_image_width | Stable diffusion image width used in our image generation |
| sd_image_height | Stable diffusion image height used in our image generation |
|
israel | null | null | null | false | null | false | israel/AOHWR | 2022-10-21T15:47:54.000Z | null | false | 795f874041e2ffed87de1de0179ef6cb5e6ddb94 | [] | [] | https://huggingface.co/datasets/israel/AOHWR/resolve/main/README.md | # Test |
arize-ai | null | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists on product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
fully composed by reviews written in english. However, the production set has
some reviews written in spanish. At Arize, we work to surface this issue and
help you solve it. | false | 7 | false | arize-ai/beer_reviews_label_drift_neutral | 2022-10-19T13:19:17.000Z | null | false | 9772109e5f6c5df495967d4ed2165e911ea24547 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/arize-ai/beer_reviews_label_drift_neutral/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in english.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
vernadankers | null | """
_DESCRIPTION = | """
class Config(datasets.BuilderConfig): | false | 1 | false | vernadankers/sst5_bcm | 2022-10-19T15:20:26.000Z | null | false | b233bff091aab43e56a4e4da0e5614d7e276d56c | [] | [] | https://huggingface.co/datasets/vernadankers/sst5_bcm/resolve/main/README.md | ---
viewer: true
---
# Dataset Card for SST5-BCM
## Dataset Description
- **Repository:** https://github.com/vernadankers/bottleneck_compositionality_metric
- **Paper:** [Needs More Information]
- **Point of Contact:** [Verna Dankers](mailto:vernadankers@gmail.com)
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
- `sentiment-analysis`
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
davanstrien | null | null | null | false | 13 | false | davanstrien/loc_maps_sample | 2022-10-19T15:07:38.000Z | null | true | 0c8a37d37eeb04d32b630cdb64bb663bad942eba | [] | [] | https://huggingface.co/datasets/davanstrien/loc_maps_sample/resolve/main/README.md | |
takiholadi | null | null | null | false | 29 | false | takiholadi/kill-me-please-dataset | 2022-10-19T15:35:00.000Z | null | false | eb35cd73924b3b1440ca251b8c27f9503000d7d0 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:ru",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:stories",
"tags:website",
"task_categories:text-generation",
"task_categories:text-classification"
] | https://huggingface.co/datasets/takiholadi/kill-me-please-dataset/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ru
multilinguality:
- monolingual
pretty_name: Kill-Me-Please Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- stories
- website
task_categories:
- text-generation
- text-classification
---
# Dataset Card for Kill-Me-Please Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Repository:** [github pet project repo](https://github.com/takiholadi/generative-kill-me-please)
### Dataset Summary
It is an Russian-language dataset containing just over 30k unique stories as written as users of https://killpls.me as of period from March 2009 to October 2022. This resource was blocked by Roskomnadzor so consider text-generation task if you want more stories.
### Languages
ru-RU
## Dataset Structure
### Data Instances
Here is an example of instance:
```
{'text': 'По глупости удалил всю 10 летнюю базу. Восстановлению не подлежит. Мне конец. КМП!'
'tags': 'техника'
'votes': 2914
'url': 'https://killpls.me/story/616'
'datetime': '4 июля 2009, 23:20'}
```
### Data Fields
- `text`: a string containing the body of the story
- `tags`: a string containing a comma-separated tags in a multi-label setup, fullset of tags (except of one empty-tagged record): `внешность`, `деньги`, `друзья`, `здоровье`, `отношения`, `работа`, `разное`, `родители`, `секс`, `семья`, `техника`, `учеба`
- `votes`: an integer sum of upvotes/downvotes
- `url`: a string containing the url where the story was web-scraped from
- `datetime`: a string containing with the datetime the story was written
### Data Splits
The has 2 multi-label stratified splits: train and test.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 27,321 |
| Test | 2,772 |
|
Norod78 | null | null | null | false | 5 | false | Norod78/MuppetFaces | 2022-10-19T14:45:17.000Z | null | false | c680d3168b82af06fb55adc58c07ac8dd4676473 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/Norod78/MuppetFaces/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: swin-muppet
## Dataset Description
This dataset has been automatically processed by AutoTrain for project swin-muppet.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<286x286 RGB PIL image>",
"target": 7
},
{
"image": "<169x170 RGB PIL image>",
"target": 13
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=24, names=['Animal', 'Beaker', 'Bert', 'BigBird', 'Bunsen', 'Camilla', 'CookieMonster', 'Elmo', 'Ernie', 'Floyd', 'Fozzie', 'Gonzo', 'Grover', 'Kermit', 'Oscar', 'Pepe', 'Piggy', 'Rowlf', 'Scooter', 'Statler', 'SwedishChef', 'TheCount', 'Waldorf', 'Zoot'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 599 |
| valid | 162 |
|
XquanL | null | null | null | false | 1 | false | XquanL/702 | 2022-10-19T14:40:45.000Z | null | false | 231dc007d72f44e50ceb8dd7f15895a8c9cdf1ab | [] | [
"license:bsd"
] | https://huggingface.co/datasets/XquanL/702/resolve/main/README.md | ---
license: bsd
---
|
helena-balabin | null | null | null | false | 37 | false | helena-balabin/pereira_fMRI_passages | 2022-10-19T15:06:55.000Z | null | false | c9c430fc986b19666f8c7152dc47b30200c19218 | [] | [] | https://huggingface.co/datasets/helena-balabin/pereira_fMRI_passages/resolve/main/README.md | ---
dataset_info:
features:
- name: language_lh
sequence:
sequence: float64
- name: language_rh
sequence:
sequence: float64
- name: vision_body
sequence:
sequence: float64
- name: vision_face
sequence:
sequence: float64
- name: vision_object
sequence:
sequence: float64
- name: vision_scene
sequence:
sequence: float64
- name: vision
sequence:
sequence: float64
- name: dmn
sequence:
sequence: float64
- name: task
sequence:
sequence: float64
- name: all
sequence:
sequence: float64
- name: paragraphs
sequence: string
splits:
- name: train
num_bytes: 1649447464
num_examples: 8
download_size: 1658802762
dataset_size: 1649447464
---
# Dataset Card for "pereira_fMRI_passages"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tomasg25 | null | @misc{Goldsack_2022,
doi = {10.48550/ARXIV.2210.09932},
url = {https://arxiv.org/abs/2210.09932},
author = {Goldsack, Tomas and Zhang, Zhihao and Lin, Chenghua and Scarton, Carolina},
title = {Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | This repository contains the PLOS and eLife datasets, introduced in the EMNLP 2022 paper "[Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature
](https://arxiv.org/abs/2210.09932)".
Each dataset contains full biomedical research articles paired with expert-written lay summaries (i.e., non-technical summaries). PLOS articles are derived from various journals published by [the Public Library of Science (PLOS)](https://plos.org/), whereas eLife articles are derived from the [eLife](https://elifesciences.org/) journal. More details/anlaysis on the content of each dataset are provided in the paper.
Both "elife" and "plos" have 6 features:
- "article": the body of the document (including the abstract), sections seperated by "/n".
- "section_headings": the title of each section, seperated by "/n".
- "keywords": keywords describing the topic of the article, seperated by "/n".
- "title" : the title of the article.
- "year" : the year the article was published.
- "summary": the lay summary of the document. | false | 228 | false | tomasg25/scientific_lay_summarisation | 2022-10-26T11:11:33.000Z | null | false | 159aa1a67eac7dd85527e218b2e68a30ebbc2ccd | [] | [
"arxiv:2210.09932",
"annotations_creators:found",
"language:en",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:abstractive-summarization",
"tags:scientific-papers",
"tags:... | https://huggingface.co/datasets/tomasg25/scientific_lay_summarisation/resolve/main/README.md | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificLaySummarisation
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
tags:
- abstractive-summarization
- scientific-papers
- lay-summarization
- PLOS
- eLife
task_categories:
- summarization
task_ids: []
---
# Dataset Card for "scientific_lay_summarisation"
- **Repository:** https://github.com/TGoldsack1/Corpora_for_Lay_Summarisation
- **Paper:** [Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature](https://arxiv.org/abs/2210.09932)
- **Size of downloaded dataset files:** 850.44 MB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 2.17 GB
### Dataset Summary
This repository contains the PLOS and eLife datasets, introduced in the EMNLP 2022 paper "[Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature
](https://arxiv.org/abs/2210.09932)" .
Each dataset contains full biomedical research articles paired with expert-written lay summaries (i.e., non-technical summaries). PLOS articles are derived from various journals published by [the Public Library of Science (PLOS)](https://plos.org/), whereas eLife articles are derived from the [eLife](https://elifesciences.org/) journal. More details/analyses on the content of each dataset are provided in the paper.
Both "elife" and "plos" have 6 features:
- "article": the body of the document (including the abstract), sections separated by "/n".
- "section_headings": the title of each section, separated by "/n".
- "keywords": keywords describing the topic of the article, separated by "/n".
- "title": the title of the article.
- "year": the year the article was published.
- "summary": the lay summary of the document.
**Note:** The format of both datasets differs from that used in the original repository (given above) in order to make them compatible with the `run_summarization.py` script of Transformers. Specifically, sentence tokenization is removed via " ".join(text), and the abstract and article sections, previously lists of sentences, are combined into a single `string` feature ("article") with each section separated by "\n". For the sentence-tokenized version of the dataset, please use the original git repository.
### Supported Tasks and Leaderboards
Papers with code - [PLOS](https://paperswithcode.com/sota/lay-summarization-on-plos) and [eLife](https://paperswithcode.com/sota/lay-summarization-on-elife).
### Languages
English
## Dataset Structure
### Data Instances
#### plos
- **Size of downloaded dataset files:** 425.22 MB
- **Size of the generated dataset:** 1.05 GB
- **Total amount of disk used:** 1.47 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"summary": "In the kidney , structures known as nephrons are responsible for collecting metabolic waste . Nephrons are composed of a ...",
"article": "Kidney function depends on the nephron , which comprises a 'blood filter , a tubule that is subdivided into functionally ...",
"section_headings": "Abstract\nIntroduction\nResults\nDiscussion\nMaterials and Methods'",
"keywords": "developmental biology\ndanio (zebrafish)\nvertebrates\nteleost fishes\nnephrology",
"title": "The cdx Genes and Retinoic Acid Control the Positioning and Segmentation of the Zebrafish Pronephros",
"year": "2007"
}
```
#### elife
- **Size of downloaded dataset files:** 425.22 MB
- **Size of the generated dataset:** 275.99 MB
- **Total amount of disk used:** 1.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"summary": "In the USA , more deaths happen in the winter than the summer . But when deaths occur varies greatly by sex , age , cause of ...",
"article": "In temperate climates , winter deaths exceed summer ones . However , there is limited information on the timing and the ...",
"section_headings": "Abstract\nIntroduction\nResults\nDiscussion\nMaterials and methods",
"keywords": "epidemiology and global health",
"title": "National and regional seasonal dynamics of all-cause and cause-specific mortality in the USA from 1980 to 2016",
"year": "2018"
}
```
### Data Fields
The data fields are the same among all splits.
#### plos
- `article`: a `string` feature.
- `section_headings`: a `string` feature.
- `keywords`: a `string` feature.
- `title` : a `string` feature.
- `year` : a `string` feature.
- `summary`: a `string` feature.
#### elife
- `article`: a `string` feature.
- `section_headings`: a `string` feature.
- `keywords`: a `string` feature.
- `title` : a `string` feature.
- `year` : a `string` feature.
- `summary`: a `string` feature.
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|plos | 24773| 1376|1376|
|elife | 4346| 241| 241|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
"Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature"
Tomas Goldsack, Zhihao Zhang, Chenghua Lin, Carolina Scarton
EMNLP 2022
``` |
helena-balabin | null | null | null | false | 31 | false | helena-balabin/pereira_fMRI_sentences | 2022-10-19T16:07:42.000Z | null | false | 38ecb87f301650f2c0bac5ffca47bf6c137398d3 | [] | [] | https://huggingface.co/datasets/helena-balabin/pereira_fMRI_sentences/resolve/main/README.md | ---
dataset_info:
features:
- name: language_lh
sequence:
sequence: float64
- name: language_rh
sequence:
sequence: float64
- name: vision_body
sequence:
sequence: float64
- name: vision_face
sequence:
sequence: float64
- name: vision_object
sequence:
sequence: float64
- name: vision_scene
sequence:
sequence: float64
- name: vision
sequence:
sequence: float64
- name: dmn
sequence:
sequence: float64
- name: task
sequence:
sequence: float64
- name: all
sequence:
sequence: float64
- name: sentences
sequence: string
splits:
- name: train
num_bytes: 6597174480
num_examples: 8
download_size: 6598415137
dataset_size: 6597174480
---
# Dataset Card for "pereira_fMRI_sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mbazaNLP | null | null | null | false | null | false | mbazaNLP/Kinyarwanda_English_parallel_dataset | 2022-10-19T15:54:19.000Z | null | false | e3b4939df19247dbb97e4e9b61d628c69d270ea2 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset/resolve/main/README.md | ---
license: cc-by-4.0
---
## Kinyarwanda-English parallel text
This dataset contains 55,000 Kinyarwanda-English sentence pairs, obtained by scraping web data from religious sources such as:
[Bible](https://servervideos.hopto.org/XMLBible/EnglishKJBible.xml)
[Quran](https://quranenc.com/en/home/download/csv/kinyarwanda_assoc)
This dataset has not been curated only cleaned.
|
andrewkroening | null | null | null | false | 303 | false | andrewkroening/538-NBA-Historical-Raptor | 2022-11-06T22:14:56.000Z | null | false | e1378170980e1e93de987c2f4976dc3dd2183975 | [] | [
"license:cc"
] | https://huggingface.co/datasets/andrewkroening/538-NBA-Historical-Raptor/resolve/main/README.md | ---
license: cc
---
## Dataset Overview
### Intro
This dataset was downloaded from the good folks at fivethirtyeight. You can find the original (or in the future, updated) versions of this and several similar datasets at [this GitHub link.](https://github.com/fivethirtyeight/data/tree/master/nba-raptor)
### Data layout
Here are the columns in this dataset, which contains data on every NBA player, broken out by season, since the 1976 NBA-ABA merger:
Column | Description
-------|---------------
`player_name` | Player name
`player_id` | Basketball-Reference.com player ID
`season` | Season
`season_type` | Regular season (RS) or playoff (PO)
`team` | Basketball-Reference ID of team
`poss` | Possessions played
`mp` | Minutes played
`raptor_box_offense` | Points above average per 100 possessions added by player on offense, based only on box score estimate
`raptor_box_defense` | Points above average per 100 possessions added by player on defense, based only on box score estimate
`raptor_box_total` | Points above average per 100 possessions added by player, based only on box score estimate
`raptor_onoff_offense` | Points above average per 100 possessions added by player on offense, based only on plus-minus data
`raptor_onoff_defense` | Points above average per 100 possessions added by player on defense, based only on plus-minus data
`raptor_onoff_total` | Points above average per 100 possessions added by player, based only on plus-minus data
`raptor_offense` | Points above average per 100 possessions added by player on offense, using both box and on-off components
`raptor_defense` | Points above average per 100 possessions added by player on defense, using both box and on-off components
`raptor_total` | Points above average per 100 possessions added by player on both offense and defense, using both box and on-off components
`war_total` | Wins Above Replacement between regular season and playoffs
`war_reg_season` | Wins Above Replacement for regular season
`war_playoffs` | Wins Above Replacement for playoffs
`predator_offense` | Predictive points above average per 100 possessions added by player on offense
`predator_defense` | Predictive points above average per 100 possessions added by player on defense
`predator_total` | Predictive points above average per 100 possessions added by player on both offense and defense
`pace_impact` | Player impact on team possessions per 48 minutes
### More information
This dataset was put together for Hugging Face by this guy: [Andrew Kroening](https://github.com/andrewkroening)
He was building some kind of a silly tool using this dataset. It's an NBA WAR Predictor tool, and you can find the Gradio interface [here.](https://huggingface.co/spaces/andrewkroening/nba-war-predictor) The GitHub repo can be found [here.](https://github.com/andrewkroening/nba-war-predictor-tool) |
pcuenq | null | null | null | false | 10 | false | pcuenq/CelebA-faces-cropped-128-encoded | 2022-10-19T17:09:12.000Z | null | false | 7deafb29500efc4a31d362844b350e8f607b2f62 | [] | [] | https://huggingface.co/datasets/pcuenq/CelebA-faces-cropped-128-encoded/resolve/main/README.md | ---
dataset_info:
features:
- name: latents
sequence: float32
splits:
- name: test
num_bytes: 41533000
num_examples: 10130
- name: train
num_bytes: 789122900
num_examples: 192469
download_size: 843386957
dataset_size: 830655900
---
# Dataset Card for "CelebA-faces-cropped-128-encoded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
devzohaib | null | null | null | false | null | false | devzohaib/roman-urdu-HateSpeech | 2022-10-19T17:33:53.000Z | null | false | da6d341d76d82dd5e8c624ad3f2fbc811d6a41d8 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/devzohaib/roman-urdu-HateSpeech/resolve/main/README.md | ---
license: afl-3.0
---
|
jeanpat | null | null | null | false | null | false | jeanpat/SmallOverlapChrom-COCO125 | 2022-10-19T18:07:53.000Z | null | false | 1549a1e312c56f0371d15c06a843c064b10ea3fb | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/jeanpat/SmallOverlapChrom-COCO125/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
estebancrop | null | null | null | false | null | false | estebancrop/pablolobato | 2022-10-19T19:29:01.000Z | null | false | 2325be7e710841e0422a31a5164b4c7bc0207f53 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/estebancrop/pablolobato/resolve/main/README.md | ---
license: unknown
---
|
estebancrop | null | null | null | false | null | false | estebancrop/pablolobato2 | 2022-10-19T19:38:05.000Z | null | false | 6ab942d3ac64c7cf70f68a02c7fb4bd84ac3f5e0 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/estebancrop/pablolobato2/resolve/main/README.md | ---
license: openrail
---
|
estebancrop | null | null | null | false | null | false | estebancrop/pablolobato3 | 2022-10-19T19:46:50.000Z | null | false | 64f1d7d284def2cbdadacda98474b668db65952b | [] | [
"license:openrail"
] | https://huggingface.co/datasets/estebancrop/pablolobato3/resolve/main/README.md | ---
license: openrail
---
|
estebancrop | null | null | null | false | null | false | estebancrop/estebancrop | 2022-10-19T20:03:41.000Z | null | false | cc10e377aa7810772ed1df838ed67a7b843132df | [] | [
"license:openrail"
] | https://huggingface.co/datasets/estebancrop/estebancrop/resolve/main/README.md | ---
license: openrail
---
|
alaa2111 | null | null | null | false | null | false | alaa2111/new_one | 2022-10-19T21:43:11.000Z | null | false | 40d56fd9ee7d8dfd327bf4ff2d61d3cf72858c33 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/alaa2111/new_one/resolve/main/README.md | ---
license: openrail
---
|
SALT-NLP | null | null | null | false | null | false | SALT-NLP/FLUE-FiQA | 2022-10-21T17:29:14.000Z | null | false | 6607cbb5129ed0db4817bbfb3b1e65ff7db9a792 | [] | [
"license:cc-by-3.0"
] | https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA/resolve/main/README.md | ---
license: cc-by-3.0
---
## Dataset Summary
- **Homepage:** https://sites.google.com/view/salt-nlp-flang
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
New Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Dataset Structure
The FiQA dataset has a corpus, queries and qrels (relevance judgments file). They are in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1`
|
relbert | null | @inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
} | [SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/) | false | 49 | false | relbert/semeval2012_relational_similarity_v3 | 2022-10-21T10:17:28.000Z | null | false | 5c5c1ed77208cde12cf6dbd819102668587a5fb5 | [] | [
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v3/resolve/main/README.md | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_v3"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity
### Dataset Summary
***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity),
but with a different dataset construction.
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains a list of positive and negative word pair from 89 pre-defined relations.
The relation types are constructed on top of following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
3: "Similar", # Synonym, Co-hypornym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relation is further grouped into child relation types where the definition can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'relation_type': '8d',
'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ]
'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|semeval2012_relational_similarity| 89 | 89|
### Number of Positive/Negative Word-pairs in each Split
| | positives | negatives |
|:--------------------------------------------|------------:|------------:|
| ('1', 'parent', 'train') | 110 | 680 |
| ('1', 'parent', 'validation') | 129 | 760 |
| ('10', 'parent', 'train') | 60 | 730 |
| ('10', 'parent', 'validation') | 66 | 823 |
| ('10a', 'child', 'train') | 10 | 780 |
| ('10a', 'child', 'validation') | 14 | 875 |
| ('10a', 'child_prototypical', 'train') | 39 | 506 |
| ('10a', 'child_prototypical', 'validation') | 63 | 938 |
| ('10b', 'child', 'train') | 10 | 780 |
| ('10b', 'child', 'validation') | 13 | 876 |
| ('10b', 'child_prototypical', 'train') | 39 | 428 |
| ('10b', 'child_prototypical', 'validation') | 57 | 707 |
| ('10c', 'child', 'train') | 10 | 780 |
| ('10c', 'child', 'validation') | 11 | 878 |
| ('10c', 'child_prototypical', 'train') | 39 | 545 |
| ('10c', 'child_prototypical', 'validation') | 45 | 650 |
| ('10d', 'child', 'train') | 10 | 780 |
| ('10d', 'child', 'validation') | 10 | 879 |
| ('10d', 'child_prototypical', 'train') | 39 | 506 |
| ('10d', 'child_prototypical', 'validation') | 39 | 506 |
| ('10e', 'child', 'train') | 10 | 780 |
| ('10e', 'child', 'validation') | 8 | 881 |
| ('10e', 'child_prototypical', 'train') | 39 | 350 |
| ('10e', 'child_prototypical', 'validation') | 27 | 218 |
| ('10f', 'child', 'train') | 10 | 780 |
| ('10f', 'child', 'validation') | 10 | 879 |
| ('10f', 'child_prototypical', 'train') | 39 | 506 |
| ('10f', 'child_prototypical', 'validation') | 39 | 506 |
| ('1a', 'child', 'train') | 10 | 780 |
| ('1a', 'child', 'validation') | 14 | 875 |
| ('1a', 'child_prototypical', 'train') | 39 | 428 |
| ('1a', 'child_prototypical', 'validation') | 63 | 812 |
| ('1b', 'child', 'train') | 10 | 780 |
| ('1b', 'child', 'validation') | 14 | 875 |
| ('1b', 'child_prototypical', 'train') | 39 | 428 |
| ('1b', 'child_prototypical', 'validation') | 63 | 812 |
| ('1c', 'child', 'train') | 10 | 780 |
| ('1c', 'child', 'validation') | 11 | 878 |
| ('1c', 'child_prototypical', 'train') | 39 | 545 |
| ('1c', 'child_prototypical', 'validation') | 45 | 650 |
| ('1d', 'child', 'train') | 10 | 780 |
| ('1d', 'child', 'validation') | 16 | 873 |
| ('1d', 'child_prototypical', 'train') | 39 | 428 |
| ('1d', 'child_prototypical', 'validation') | 75 | 1040 |
| ('1e', 'child', 'train') | 10 | 780 |
| ('1e', 'child', 'validation') | 8 | 881 |
| ('1e', 'child_prototypical', 'train') | 39 | 311 |
| ('1e', 'child_prototypical', 'validation') | 27 | 191 |
| ('2', 'parent', 'train') | 100 | 690 |
| ('2', 'parent', 'validation') | 117 | 772 |
| ('2a', 'child', 'train') | 10 | 780 |
| ('2a', 'child', 'validation') | 15 | 874 |
| ('2a', 'child_prototypical', 'train') | 39 | 506 |
| ('2a', 'child_prototypical', 'validation') | 69 | 1061 |
| ('2b', 'child', 'train') | 10 | 780 |
| ('2b', 'child', 'validation') | 11 | 878 |
| ('2b', 'child_prototypical', 'train') | 39 | 389 |
| ('2b', 'child_prototypical', 'validation') | 45 | 470 |
| ('2c', 'child', 'train') | 10 | 780 |
| ('2c', 'child', 'validation') | 13 | 876 |
| ('2c', 'child_prototypical', 'train') | 39 | 467 |
| ('2c', 'child_prototypical', 'validation') | 57 | 764 |
| ('2d', 'child', 'train') | 10 | 780 |
| ('2d', 'child', 'validation') | 10 | 879 |
| ('2d', 'child_prototypical', 'train') | 39 | 467 |
| ('2d', 'child_prototypical', 'validation') | 39 | 467 |
| ('2e', 'child', 'train') | 10 | 780 |
| ('2e', 'child', 'validation') | 11 | 878 |
| ('2e', 'child_prototypical', 'train') | 39 | 506 |
| ('2e', 'child_prototypical', 'validation') | 45 | 605 |
| ('2f', 'child', 'train') | 10 | 780 |
| ('2f', 'child', 'validation') | 11 | 878 |
| ('2f', 'child_prototypical', 'train') | 39 | 623 |
| ('2f', 'child_prototypical', 'validation') | 45 | 740 |
| ('2g', 'child', 'train') | 10 | 780 |
| ('2g', 'child', 'validation') | 16 | 873 |
| ('2g', 'child_prototypical', 'train') | 39 | 389 |
| ('2g', 'child_prototypical', 'validation') | 75 | 965 |
| ('2h', 'child', 'train') | 10 | 780 |
| ('2h', 'child', 'validation') | 11 | 878 |
| ('2h', 'child_prototypical', 'train') | 39 | 506 |
| ('2h', 'child_prototypical', 'validation') | 45 | 605 |
| ('2i', 'child', 'train') | 10 | 780 |
| ('2i', 'child', 'validation') | 9 | 880 |
| ('2i', 'child_prototypical', 'train') | 39 | 545 |
| ('2i', 'child_prototypical', 'validation') | 33 | 446 |
| ('2j', 'child', 'train') | 10 | 780 |
| ('2j', 'child', 'validation') | 10 | 879 |
| ('2j', 'child_prototypical', 'train') | 39 | 584 |
| ('2j', 'child_prototypical', 'validation') | 39 | 584 |
| ('3', 'parent', 'train') | 80 | 710 |
| ('3', 'parent', 'validation') | 80 | 809 |
| ('3a', 'child', 'train') | 10 | 780 |
| ('3a', 'child', 'validation') | 11 | 878 |
| ('3a', 'child_prototypical', 'train') | 39 | 506 |
| ('3a', 'child_prototypical', 'validation') | 45 | 605 |
| ('3b', 'child', 'train') | 10 | 780 |
| ('3b', 'child', 'validation') | 11 | 878 |
| ('3b', 'child_prototypical', 'train') | 39 | 623 |
| ('3b', 'child_prototypical', 'validation') | 45 | 740 |
| ('3c', 'child', 'train') | 10 | 780 |
| ('3c', 'child', 'validation') | 12 | 877 |
| ('3c', 'child_prototypical', 'train') | 39 | 467 |
| ('3c', 'child_prototypical', 'validation') | 51 | 659 |
| ('3d', 'child', 'train') | 10 | 780 |
| ('3d', 'child', 'validation') | 14 | 875 |
| ('3d', 'child_prototypical', 'train') | 39 | 467 |
| ('3d', 'child_prototypical', 'validation') | 63 | 875 |
| ('3e', 'child', 'train') | 10 | 780 |
| ('3e', 'child', 'validation') | 5 | 884 |
| ('3e', 'child_prototypical', 'train') | 39 | 623 |
| ('3e', 'child_prototypical', 'validation') | 10 | 140 |
| ('3f', 'child', 'train') | 10 | 780 |
| ('3f', 'child', 'validation') | 11 | 878 |
| ('3f', 'child_prototypical', 'train') | 39 | 662 |
| ('3f', 'child_prototypical', 'validation') | 45 | 785 |
| ('3g', 'child', 'train') | 10 | 780 |
| ('3g', 'child', 'validation') | 6 | 883 |
| ('3g', 'child_prototypical', 'train') | 39 | 584 |
| ('3g', 'child_prototypical', 'validation') | 15 | 200 |
| ('3h', 'child', 'train') | 10 | 780 |
| ('3h', 'child', 'validation') | 10 | 879 |
| ('3h', 'child_prototypical', 'train') | 39 | 584 |
| ('3h', 'child_prototypical', 'validation') | 39 | 584 |
| ('4', 'parent', 'train') | 80 | 710 |
| ('4', 'parent', 'validation') | 82 | 807 |
| ('4a', 'child', 'train') | 10 | 780 |
| ('4a', 'child', 'validation') | 11 | 878 |
| ('4a', 'child_prototypical', 'train') | 39 | 623 |
| ('4a', 'child_prototypical', 'validation') | 45 | 740 |
| ('4b', 'child', 'train') | 10 | 780 |
| ('4b', 'child', 'validation') | 7 | 882 |
| ('4b', 'child_prototypical', 'train') | 39 | 428 |
| ('4b', 'child_prototypical', 'validation') | 21 | 203 |
| ('4c', 'child', 'train') | 10 | 780 |
| ('4c', 'child', 'validation') | 12 | 877 |
| ('4c', 'child_prototypical', 'train') | 39 | 545 |
| ('4c', 'child_prototypical', 'validation') | 51 | 761 |
| ('4d', 'child', 'train') | 10 | 780 |
| ('4d', 'child', 'validation') | 4 | 885 |
| ('4d', 'child_prototypical', 'train') | 39 | 389 |
| ('4d', 'child_prototypical', 'validation') | 6 | 46 |
| ('4e', 'child', 'train') | 10 | 780 |
| ('4e', 'child', 'validation') | 12 | 877 |
| ('4e', 'child_prototypical', 'train') | 39 | 623 |
| ('4e', 'child_prototypical', 'validation') | 51 | 863 |
| ('4f', 'child', 'train') | 10 | 780 |
| ('4f', 'child', 'validation') | 9 | 880 |
| ('4f', 'child_prototypical', 'train') | 39 | 623 |
| ('4f', 'child_prototypical', 'validation') | 33 | 512 |
| ('4g', 'child', 'train') | 10 | 780 |
| ('4g', 'child', 'validation') | 15 | 874 |
| ('4g', 'child_prototypical', 'train') | 39 | 467 |
| ('4g', 'child_prototypical', 'validation') | 69 | 992 |
| ('4h', 'child', 'train') | 10 | 780 |
| ('4h', 'child', 'validation') | 12 | 877 |
| ('4h', 'child_prototypical', 'train') | 39 | 584 |
| ('4h', 'child_prototypical', 'validation') | 51 | 812 |
| ('5', 'parent', 'train') | 90 | 700 |
| ('5', 'parent', 'validation') | 105 | 784 |
| ('5a', 'child', 'train') | 10 | 780 |
| ('5a', 'child', 'validation') | 14 | 875 |
| ('5a', 'child_prototypical', 'train') | 39 | 467 |
| ('5a', 'child_prototypical', 'validation') | 63 | 875 |
| ('5b', 'child', 'train') | 10 | 780 |
| ('5b', 'child', 'validation') | 8 | 881 |
| ('5b', 'child_prototypical', 'train') | 39 | 584 |
| ('5b', 'child_prototypical', 'validation') | 27 | 380 |
| ('5c', 'child', 'train') | 10 | 780 |
| ('5c', 'child', 'validation') | 11 | 878 |
| ('5c', 'child_prototypical', 'train') | 39 | 506 |
| ('5c', 'child_prototypical', 'validation') | 45 | 605 |
| ('5d', 'child', 'train') | 10 | 780 |
| ('5d', 'child', 'validation') | 15 | 874 |
| ('5d', 'child_prototypical', 'train') | 39 | 428 |
| ('5d', 'child_prototypical', 'validation') | 69 | 923 |
| ('5e', 'child', 'train') | 10 | 780 |
| ('5e', 'child', 'validation') | 8 | 881 |
| ('5e', 'child_prototypical', 'train') | 39 | 584 |
| ('5e', 'child_prototypical', 'validation') | 27 | 380 |
| ('5f', 'child', 'train') | 10 | 780 |
| ('5f', 'child', 'validation') | 11 | 878 |
| ('5f', 'child_prototypical', 'train') | 39 | 584 |
| ('5f', 'child_prototypical', 'validation') | 45 | 695 |
| ('5g', 'child', 'train') | 10 | 780 |
| ('5g', 'child', 'validation') | 9 | 880 |
| ('5g', 'child_prototypical', 'train') | 39 | 623 |
| ('5g', 'child_prototypical', 'validation') | 33 | 512 |
| ('5h', 'child', 'train') | 10 | 780 |
| ('5h', 'child', 'validation') | 15 | 874 |
| ('5h', 'child_prototypical', 'train') | 39 | 545 |
| ('5h', 'child_prototypical', 'validation') | 69 | 1130 |
| ('5i', 'child', 'train') | 10 | 780 |
| ('5i', 'child', 'validation') | 14 | 875 |
| ('5i', 'child_prototypical', 'train') | 39 | 545 |
| ('5i', 'child_prototypical', 'validation') | 63 | 1001 |
| ('6', 'parent', 'train') | 80 | 710 |
| ('6', 'parent', 'validation') | 99 | 790 |
| ('6a', 'child', 'train') | 10 | 780 |
| ('6a', 'child', 'validation') | 15 | 874 |
| ('6a', 'child_prototypical', 'train') | 39 | 467 |
| ('6a', 'child_prototypical', 'validation') | 69 | 992 |
| ('6b', 'child', 'train') | 10 | 780 |
| ('6b', 'child', 'validation') | 11 | 878 |
| ('6b', 'child_prototypical', 'train') | 39 | 584 |
| ('6b', 'child_prototypical', 'validation') | 45 | 695 |
| ('6c', 'child', 'train') | 10 | 780 |
| ('6c', 'child', 'validation') | 13 | 876 |
| ('6c', 'child_prototypical', 'train') | 39 | 584 |
| ('6c', 'child_prototypical', 'validation') | 57 | 935 |
| ('6d', 'child', 'train') | 10 | 780 |
| ('6d', 'child', 'validation') | 10 | 879 |
| ('6d', 'child_prototypical', 'train') | 39 | 701 |
| ('6d', 'child_prototypical', 'validation') | 39 | 701 |
| ('6e', 'child', 'train') | 10 | 780 |
| ('6e', 'child', 'validation') | 11 | 878 |
| ('6e', 'child_prototypical', 'train') | 39 | 584 |
| ('6e', 'child_prototypical', 'validation') | 45 | 695 |
| ('6f', 'child', 'train') | 10 | 780 |
| ('6f', 'child', 'validation') | 12 | 877 |
| ('6f', 'child_prototypical', 'train') | 39 | 506 |
| ('6f', 'child_prototypical', 'validation') | 51 | 710 |
| ('6g', 'child', 'train') | 10 | 780 |
| ('6g', 'child', 'validation') | 12 | 877 |
| ('6g', 'child_prototypical', 'train') | 39 | 467 |
| ('6g', 'child_prototypical', 'validation') | 51 | 659 |
| ('6h', 'child', 'train') | 10 | 780 |
| ('6h', 'child', 'validation') | 15 | 874 |
| ('6h', 'child_prototypical', 'train') | 39 | 506 |
| ('6h', 'child_prototypical', 'validation') | 69 | 1061 |
| ('7', 'parent', 'train') | 80 | 710 |
| ('7', 'parent', 'validation') | 91 | 798 |
| ('7a', 'child', 'train') | 10 | 780 |
| ('7a', 'child', 'validation') | 14 | 875 |
| ('7a', 'child_prototypical', 'train') | 39 | 545 |
| ('7a', 'child_prototypical', 'validation') | 63 | 1001 |
| ('7b', 'child', 'train') | 10 | 780 |
| ('7b', 'child', 'validation') | 7 | 882 |
| ('7b', 'child_prototypical', 'train') | 39 | 389 |
| ('7b', 'child_prototypical', 'validation') | 21 | 182 |
| ('7c', 'child', 'train') | 10 | 780 |
| ('7c', 'child', 'validation') | 11 | 878 |
| ('7c', 'child_prototypical', 'train') | 39 | 428 |
| ('7c', 'child_prototypical', 'validation') | 45 | 515 |
| ('7d', 'child', 'train') | 10 | 780 |
| ('7d', 'child', 'validation') | 14 | 875 |
| ('7d', 'child_prototypical', 'train') | 39 | 545 |
| ('7d', 'child_prototypical', 'validation') | 63 | 1001 |
| ('7e', 'child', 'train') | 10 | 780 |
| ('7e', 'child', 'validation') | 10 | 879 |
| ('7e', 'child_prototypical', 'train') | 39 | 428 |
| ('7e', 'child_prototypical', 'validation') | 39 | 428 |
| ('7f', 'child', 'train') | 10 | 780 |
| ('7f', 'child', 'validation') | 12 | 877 |
| ('7f', 'child_prototypical', 'train') | 39 | 389 |
| ('7f', 'child_prototypical', 'validation') | 51 | 557 |
| ('7g', 'child', 'train') | 10 | 780 |
| ('7g', 'child', 'validation') | 9 | 880 |
| ('7g', 'child_prototypical', 'train') | 39 | 311 |
| ('7g', 'child_prototypical', 'validation') | 33 | 248 |
| ('7h', 'child', 'train') | 10 | 780 |
| ('7h', 'child', 'validation') | 14 | 875 |
| ('7h', 'child_prototypical', 'train') | 39 | 350 |
| ('7h', 'child_prototypical', 'validation') | 63 | 686 |
| ('8', 'parent', 'train') | 80 | 710 |
| ('8', 'parent', 'validation') | 90 | 799 |
| ('8a', 'child', 'train') | 10 | 780 |
| ('8a', 'child', 'validation') | 14 | 875 |
| ('8a', 'child_prototypical', 'train') | 39 | 428 |
| ('8a', 'child_prototypical', 'validation') | 63 | 812 |
| ('8b', 'child', 'train') | 10 | 780 |
| ('8b', 'child', 'validation') | 7 | 882 |
| ('8b', 'child_prototypical', 'train') | 39 | 584 |
| ('8b', 'child_prototypical', 'validation') | 21 | 287 |
| ('8c', 'child', 'train') | 10 | 780 |
| ('8c', 'child', 'validation') | 12 | 877 |
| ('8c', 'child_prototypical', 'train') | 39 | 389 |
| ('8c', 'child_prototypical', 'validation') | 51 | 557 |
| ('8d', 'child', 'train') | 10 | 780 |
| ('8d', 'child', 'validation') | 13 | 876 |
| ('8d', 'child_prototypical', 'train') | 39 | 389 |
| ('8d', 'child_prototypical', 'validation') | 57 | 650 |
| ('8e', 'child', 'train') | 10 | 780 |
| ('8e', 'child', 'validation') | 11 | 878 |
| ('8e', 'child_prototypical', 'train') | 39 | 389 |
| ('8e', 'child_prototypical', 'validation') | 45 | 470 |
| ('8f', 'child', 'train') | 10 | 780 |
| ('8f', 'child', 'validation') | 12 | 877 |
| ('8f', 'child_prototypical', 'train') | 39 | 428 |
| ('8f', 'child_prototypical', 'validation') | 51 | 608 |
| ('8g', 'child', 'train') | 10 | 780 |
| ('8g', 'child', 'validation') | 7 | 882 |
| ('8g', 'child_prototypical', 'train') | 39 | 272 |
| ('8g', 'child_prototypical', 'validation') | 21 | 119 |
| ('8h', 'child', 'train') | 10 | 780 |
| ('8h', 'child', 'validation') | 14 | 875 |
| ('8h', 'child_prototypical', 'train') | 39 | 467 |
| ('8h', 'child_prototypical', 'validation') | 63 | 875 |
| ('9', 'parent', 'train') | 90 | 700 |
| ('9', 'parent', 'validation') | 96 | 793 |
| ('9a', 'child', 'train') | 10 | 780 |
| ('9a', 'child', 'validation') | 14 | 875 |
| ('9a', 'child_prototypical', 'train') | 39 | 350 |
| ('9a', 'child_prototypical', 'validation') | 63 | 686 |
| ('9b', 'child', 'train') | 10 | 780 |
| ('9b', 'child', 'validation') | 12 | 877 |
| ('9b', 'child_prototypical', 'train') | 39 | 506 |
| ('9b', 'child_prototypical', 'validation') | 51 | 710 |
| ('9c', 'child', 'train') | 10 | 780 |
| ('9c', 'child', 'validation') | 7 | 882 |
| ('9c', 'child_prototypical', 'train') | 39 | 155 |
| ('9c', 'child_prototypical', 'validation') | 21 | 56 |
| ('9d', 'child', 'train') | 10 | 780 |
| ('9d', 'child', 'validation') | 9 | 880 |
| ('9d', 'child_prototypical', 'train') | 39 | 662 |
| ('9d', 'child_prototypical', 'validation') | 33 | 545 |
| ('9e', 'child', 'train') | 10 | 780 |
| ('9e', 'child', 'validation') | 8 | 881 |
| ('9e', 'child_prototypical', 'train') | 39 | 701 |
| ('9e', 'child_prototypical', 'validation') | 27 | 461 |
| ('9f', 'child', 'train') | 10 | 780 |
| ('9f', 'child', 'validation') | 10 | 879 |
| ('9f', 'child_prototypical', 'train') | 39 | 506 |
| ('9f', 'child_prototypical', 'validation') | 39 | 506 |
| ('9g', 'child', 'train') | 10 | 780 |
| ('9g', 'child', 'validation') | 14 | 875 |
| ('9g', 'child_prototypical', 'train') | 39 | 389 |
| ('9g', 'child_prototypical', 'validation') | 63 | 749 |
| ('9h', 'child', 'train') | 10 | 780 |
| ('9h', 'child', 'validation') | 13 | 876 |
| ('9h', 'child_prototypical', 'train') | 39 | 506 |
| ('9h', 'child_prototypical', 'validation') | 57 | 821 |
| ('9i', 'child', 'train') | 10 | 780 |
| ('9i', 'child', 'validation') | 9 | 880 |
| ('9i', 'child_prototypical', 'train') | 39 | 506 |
| ('9i', 'child_prototypical', 'validation') | 33 | 413 |
### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` |
elisachen | null | null | null | false | 3 | false | elisachen/uber-trips | 2022-10-20T02:53:08.000Z | null | false | 248b7c37673309d07f896b0242300620331e3391 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/elisachen/uber-trips/resolve/main/README.md | ---
license: bsd
---
|
relbert | null | @inproceedings{li-16,
title = {Commonsense Knowledge Base Completion},
author = {Xiang Li and Aynaz Taheri and Lifu Tu and Kevin Gimpel},
booktitle = {Proc. of ACL},
year = {2016}
}
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
} | [ConceptNet with high confidence](https://home.ttic.edu/~kgimpel/commonsense.html) | false | 6 | false | relbert/conceptnet_high_confidence_v2 | 2022-10-20T05:56:02.000Z | null | false | 6b506fe6e67f3d88db953537b343948a127f3c78 | [] | [
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/relbert/conceptnet_high_confidence_v2/resolve/main/README.md | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: ConceptNet with High Confidence
---
# Dataset Card for "relbert/conceptnet_high_confidence_v2"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
- **Dataset:** High Confidence Subset of ConceptNet
### Dataset Summary
***IMPORTANT***: This is the same dataset of [relbert/conceptnet_high_confidence](https://huggingface.co/datasets/relbert/conceptnet_high_confidence) but without relations of `NotCapableOf` and `NotDesires`.
The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html), which compiled
to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
"relation_type": "AtLocation",
"positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ],
"negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ]
}
```
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|conceptnet_high_confidence| 25 | 24|
### Number of Positive/Negative Word-pairs in each Split
| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
|:-----------------|-------------------:|-------------------:|------------------------:|------------------------:|
| AtLocation | 383 | 1749 | 97 | 574 |
| CapableOf | 195 | 1771 | 73 | 596 |
| Causes | 71 | 1778 | 26 | 591 |
| CausesDesire | 9 | 1774 | 11 | 591 |
| CreatedBy | 2 | 1777 | 0 | 0 |
| DefinedAs | 0 | 0 | 2 | 591 |
| Desires | 16 | 1775 | 12 | 591 |
| HasA | 67 | 1795 | 17 | 591 |
| HasFirstSubevent | 2 | 1777 | 0 | 0 |
| HasLastSubevent | 2 | 1777 | 3 | 589 |
| HasPrerequisite | 168 | 1784 | 57 | 588 |
| HasProperty | 94 | 1782 | 39 | 601 |
| HasSubevent | 125 | 1779 | 40 | 605 |
| IsA | 310 | 1745 | 98 | 599 |
| MadeOf | 17 | 1774 | 7 | 589 |
| MotivatedByGoal | 14 | 1777 | 11 | 591 |
| PartOf | 34 | 1782 | 7 | 589 |
| ReceivesAction | 18 | 1774 | 8 | 589 |
| SymbolOf | 0 | 0 | 2 | 592 |
| UsedFor | 249 | 1796 | 81 | 584 |
| SUM | 1776 | 31966 | 591 | 10641 |
### Citation Information
```
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
}
``` |
pierro | null | null | null | false | null | false | pierro/sung | 2022-10-20T04:15:32.000Z | null | false | 41687c6f45baefd710e837e3ea9e8ca996f1fda0 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/pierro/sung/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
cjvt | null | @article{krek2017translation,
title={From translation equivalents to synonyms: creation of a Slovene thesaurus using word co-occurrence network analysis},
author={Krek, Simon and Laskowski, Cyprian and Robnik-{\v{S}}ikonja, Marko},
journal={Proceedings of eLex},
pages={93--109},
year={2017}
} | This is an automatically created Slovene thesaurus from Slovene data available in a comprehensive
English–Slovenian dictionary, a monolingual dictionary, and a corpus. A network analysis on the bilingual dictionary
word co-occurrence graph was used, together with additional information from the distributional thesaurus data
available as part of the Sketch Engine tool and extracted from the 1.2 billion word Gigafida corpus and the
monolingual dictionary. | false | 36 | false | cjvt/slo_thesaurus | 2022-10-20T12:23:03.000Z | null | false | fee643c48b14fb0a02a609a8162fa5aa704b7305 | [] | [
"annotations_creators:machine-generated",
"language:sl",
"language_creators:machine-generated",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"tags:sopomenke",
"tags:synonyms",
"task_categories:other"
] | https://huggingface.co/datasets/cjvt/slo_thesaurus/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- sl
language_creators:
- machine-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Thesaurus of Modern Slovene 1.0
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- sopomenke
- synonyms
task_categories:
- other
task_ids: []
---
# Dataset Card for Thesaurus of Modern Slovene 1.0
Also known as "Sopomenke 1.0". Available in application form online: https://viri.cjvt.si/sopomenke/slv/.
### Dataset Summary
This is an automatically created Slovene thesaurus from Slovene data available in a comprehensive English–Slovenian dictionary, a monolingual dictionary, and a corpus. A network analysis on the bilingual dictionary word co-occurrence graph was used, together with additional information from the distributional thesaurus data available as part of the Sketch Engine tool and extracted from the 1.2 billion word Gigafida corpus and the monolingual dictionary.
For a detailed description of the data, please see the paper Krek et al. (2017).
### Supported Tasks and Leaderboards
Other (the data is a knowledge base).
### Languages
Slovenian.
## Dataset Structure
### Data Instances
Each entry is stored in its own instance. The following instance contains the metadata for the `headword` "abeceda" (EN: "alphabet").
```
{
'id_headword': 'th.12',
'headword': 'abeceda',
'groups_core': [],
'groups_near': [
{
'id_words': ['th.12.1', 'th.12.2'],
'words': ['pisava', 'črkopis'],
'scores': [0.3311710059642792, 0.3311710059642792],
'domains': [['jezikoslovje'], ['jezikoslovje']]
}
]
}
```
### Data Fields
- `id_headword`: a string ID of the word;
- `headword`: the word whose synonyms are grouped in the instance;
- `groups_core`: groups of likely synonyms - each group contains the IDs of the words (`id_words`), the synonyms (`words`), and how strong the synonym relation (`scores`) is. Some groups also have domains annotated (`domains`, >= 1 per word, i.e. `domains` is a list of lists);
- `groups_near`: same as `groups_near`, but the synonyms here are typically less likely to be exact synonyms and more likely to be otherwise similar.
## Additional Information
### Dataset Curators
Simon Krek; et al. (please see http://hdl.handle.net/11356/1166 for the full list).
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@article{krek2017translation,
title={From translation equivalents to synonyms: creation of a Slovene thesaurus using word co-occurrence network analysis},
author={Krek, Simon and Laskowski, Cyprian and Robnik-{\v{S}}ikonja, Marko},
journal={Proceedings of eLex},
pages={93--109},
year={2017}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
myxzlpltk | null | null | null | false | null | false | myxzlpltk/times-new-roman-character | 2022-10-20T05:56:46.000Z | null | false | e7d4d4c5b87c9375171b57fac29b7d759b17aa49 | [] | [
"license:mit"
] | https://huggingface.co/datasets/myxzlpltk/times-new-roman-character/resolve/main/README.md | ---
license: mit
---
|
amanneo | null | null | null | false | 4 | false | amanneo/enron-mail-corpus-mini | 2022-10-20T13:08:21.000Z | null | false | 65a43c364766f0af2d314f6cce3bb1980a1913a4 | [] | [] | https://huggingface.co/datasets/amanneo/enron-mail-corpus-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: mail_length
dtype: int64
splits:
- name: test
num_bytes: 205837.52311697626
num_examples: 4000
- name: train
num_bytes: 1852537.7080527863
num_examples: 36000
download_size: 2332694
dataset_size: 2058375.2311697626
---
# Dataset Card for "enron-mail-corpus-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YannMinh | null | null | null | false | null | false | YannMinh/noomuseum | 2022-10-20T08:08:25.000Z | null | false | f1194cb09a831dafe07d38b163b6bca6481d9c88 | [] | [] | https://huggingface.co/datasets/YannMinh/noomuseum/resolve/main/README.md | |
davanstrien | null | null | null | false | 8 | false | davanstrien/loc_maps_sample_small | 2022-10-20T08:33:54.000Z | null | true | 7801146d99fbdd76f82c1cbad8163c215c0fcac6 | [] | [] | https://huggingface.co/datasets/davanstrien/loc_maps_sample_small/resolve/main/README.md | |
Andres12an | null | null | null | false | null | false | Andres12an/AT | 2022-10-20T09:48:34.000Z | null | false | c869b0a6d03598d217b700e60277e4b274c0e716 | [] | [
"license:c-uda"
] | https://huggingface.co/datasets/Andres12an/AT/resolve/main/README.md | ---
license: c-uda
---
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.