author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366739 | 2022-11-07T20:37:13.000Z | null | false | b048e92848d7f9125b7c70cbafa2ec4c50b0864e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v4"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366739/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v4
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-30b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v4
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v4
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
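For reference, the stored predictions can be pulled like any other dataset repository. A minimal sketch (assuming the repository's files load directly with `datasets`; the repository id is taken from this card):
```python
from datasets import load_dataset

# Load the AutoTrain prediction repository like any other dataset repo.
# This assumes the predictions are stored in a format `datasets` can auto-load.
preds = load_dataset(
    "autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366739"
)
print(preds)
```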
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366740 | 2022-11-07T19:47:10.000Z | null | false | 480460c2c7aee0e610f719a6018cf6d78fbb0701 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v4"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366740/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v4
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v4
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v4
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366737 | 2022-11-07T19:45:39.000Z | null | false | fadefe3f12997cab6f12c63824d313a0a76c889d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v4"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366737/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v4
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-350m_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v4
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v4
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
At0x | null | null | null | false | 3 | false | At0x/AIUniverse | 2022-11-09T22:39:52.000Z | null | false | 026e6d42bde2c22ccd1d5bb55c47fd57e5bb5b13 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/At0x/AIUniverse/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
AlliumPlayzDeluxo | null | null | null | false | 2 | false | AlliumPlayzDeluxo/wikiplussearch | 2022-11-07T23:20:34.000Z | null | false | 8861727d8a9fcc7e1b9b997b3f160d40bba36e57 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/AlliumPlayzDeluxo/wikiplussearch/resolve/main/README.md | ---
license: apache-2.0
---
|
Duno9 | null | null | null | false | null | false | Duno9/text_inversion_toril | 2022-11-08T00:41:10.000Z | null | false | df9ef59e5a8b02a5f4ec2c2b7bce07c0bafa921a | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Duno9/text_inversion_toril/resolve/main/README.md | ---
license: openrail
---
|
mac326 | null | null | null | false | 1 | false | mac326/test | 2022-11-08T00:49:45.000Z | null | false | da826ecc7f0db408c41cb45766be606c44b3aed1 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/mac326/test/resolve/main/README.md | ---
license: openrail
---
|
jianguo | null | null | null | false | null | false | jianguo/jianguo-1234 | 2022-11-08T03:12:13.000Z | null | false | ef0b6be47597c2ac7d3c116b1dffb405fbbda591 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/jianguo/jianguo-1234/resolve/main/README.md | ---
license: openrail
---
|
GlobalVisualMemory | null | null | null | false | null | false | GlobalVisualMemory/SuperVisualActions | 2022-11-09T19:30:13.000Z | null | false | 6989988aa10d8e648f3beae9245cbb759ae2cc9d | [] | [] | https://huggingface.co/datasets/GlobalVisualMemory/SuperVisualActions/resolve/main/README.md | # SuperVisual Actions
The SuperVisual Actions dataset is crowdsourced using the tab-recording feature in the Chrome & Edge browsers.
Each .supervisual file is a zip archive containing a SuperVisual session. The session demonstrates an action being completed, corresponding to a prompt in prompts.csv.
Each SuperVisual session contains:
- Audio/video blobs in MP4 inside a WebM container
- Mouse click and keypress actions along with metadata
- A highlight image / screenshot of the contents along with OCR text and metadata
More information about the collected data and its schema is available at https://www.supervisual.app
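Since each session ships as a plain zip archive, its contents can be inspected with the Python standard library. A minimal sketch (the path `session.supervisual` is a placeholder, and the member layout is not documented here):
```python
import zipfile

# Inspect a SuperVisual session archive (a .supervisual file is a zip).
# "session.supervisual" is a placeholder path, not a file shipped with the dataset.
with zipfile.ZipFile("session.supervisual") as session:
    for member in session.namelist():
        info = session.getinfo(member)
        print(f"{member}: {info.file_size} bytes")
```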
---
license: cc-by-4.0
|
Tristan | null | null | null | false | 233 | false | Tristan/olm-october-2022-tokenized | 2022-11-08T07:58:59.000Z | null | false | 25e7626c126613c2898bd29f8cb101e410fee989 | [] | [] | https://huggingface.co/datasets/Tristan/olm-october-2022-tokenized/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 84051313200.0
num_examples: 23347587
download_size: 21176572924
dataset_size: 84051313200.0
---
# Dataset Card for "olm-october-2022-tokenized-olm-bert-base-uncased"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966768 | 2022-11-08T07:38:50.000Z | null | false | a3e6a10b65441edae7f8f1de9f20eec218082d20 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/random"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__random-en-805a17-2021966768/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/random
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: futin/random
dataset_config: en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966769 | 2022-11-08T05:54:50.000Z | null | false | 42fda3c0d1ef504e2c100f16288a4da9e7a082b8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/random"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__random-en-805a17-2021966769/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/random
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: futin/random
dataset_config: en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966770 | 2022-11-08T05:39:34.000Z | null | false | 98684aeb6f743727a96594d3fe2d5f5c0a3fc0c1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/random"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__random-en-805a17-2021966770/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/random
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: futin/random
dataset_config: en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
lasha-nlp | null | null | null | false | 1 | false | lasha-nlp/CONDAQA | 2022-11-08T07:04:12.000Z | null | false | 3c9caa2f2f6960711e7f4d2e800581def2b6c183 | [] | [
"arxiv:2211.00295",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"language_creators:crowdsourced",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:negation",
"tags:reading comprehension",
"task_categories:question-answering"
] | https://huggingface.co/datasets/lasha-nlp/CONDAQA/resolve/main/README.md |
---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: condaqa
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- negation
- reading comprehension
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA)
- **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295)
- **Point of Contact:** aravicha@andrew.cmu.edu
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
```
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{"QuestionID": "q10",
"original cue": "rarely",
"PassageEditID": 0,
"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
"SampleID": 5294,
"label": "YES",
"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
"PassageID": 444,
"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```
### Data Fields
* `QuestionID`: unique ID for this question (might be asked for multiple passages)
* `original cue`: Negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: answer
* `original sentence`: Sentence that contains the negated statement
* `sentence2`: question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: passage
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("lasha-nlp/CONDAQA", split="train")
dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")
test_set = load_dataset("lasha-nlp/CONDAQA", split="test")
```
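Since each edited passage keeps the same `QuestionID` as its original, the contrastive clusters described above can be regrouped after loading. A minimal sketch using only the documented fields (the split name follows the snippet above):
```python
from collections import defaultdict
from datasets import load_dataset

dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")

# Group question-answer pairs into contrastive clusters: the same question
# over the original passage (PassageEditID == 0) and its three edits (1-3).
clusters = defaultdict(list)
for example in dev_set:
    clusters[example["QuestionID"]].append(
        (example["PassageEditID"], example["label"])
    )
```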
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language, and consequently be deployed where it will have an immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
### Citation Information
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
``` |
pixta-ai | null | null | null | false | null | false | pixta-ai/mixed-race-human-emotion | 2022-11-08T07:38:03.000Z | null | false | fd09e317ea7147373a6fbd3cede5cc02d7854a98 | [] | [] | https://huggingface.co/datasets/pixta-ai/mixed-race-human-emotion/resolve/main/README.md | # 1. Overview
This dataset is a collection of 6,000+ images of mixed-race human faces with various expressions & emotions, ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia-Pacific region, offering fully managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.
# 2. The dataset
This dataset contains 6,000+ images of facial emotion. Each dataset is supported by both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform, providing data, content, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology into managing, curating and processing over 100M visual materials, and serving globally leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact us via email at contact@pixta.ai. |
iKonaN | null | null | null | false | null | false | iKonaN/ley | 2022-11-08T08:16:12.000Z | null | false | a25191b4a0575327e61f541374b9afe45387f772 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/iKonaN/ley/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-futin__random-en-30c46b-2023566786 | 2022-11-08T12:21:56.000Z | null | false | 92b053991b1742eaa198212617eed2abd572e0f3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:futin/random"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-futin__random-en-30c46b-2023566786/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/random
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: futin/random
dataset_config: en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
amir7d0 | null | null | null | false | 18 | false | amir7d0/laion2B-fa-images | 2022-11-09T16:36:43.000Z | null | false | b9cd95a557cc71a144179dfbc97b9603382e1cfa | [] | [] | https://huggingface.co/datasets/amir7d0/laion2B-fa-images/resolve/main/README.md | ---
dataset_info:
features:
- name: SAMPLE_ID
dtype: int64
- name: TEXT
dtype: string
- name: URL
dtype: string
- name: IMAGE_PATH
dtype: string
- name: IMAGE
dtype: image
splits:
- name: train
num_bytes: 21488547.0
num_examples: 1000
download_size: 21283656
dataset_size: 21488547.0
---
# Dataset Card for "laion2B-fa-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eminecg | null | null | null | false | 9 | false | eminecg/petitions-ds | 2022-11-08T09:28:57.000Z | null | false | 45fb5843a8fc3fde3028a623d7afb8d3e8f42007 | [] | [] | https://huggingface.co/datasets/eminecg/petitions-ds/resolve/main/README.md | ---
dataset_info:
features:
- name: petition
dtype: string
- name: petition_length
dtype: int64
splits:
- name: train
num_bytes: 29426840.1
num_examples: 2475
- name: validation
num_bytes: 3269648.9
num_examples: 275
download_size: 14382239
dataset_size: 32696489.0
---
# Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna | null | null | null | false | null | false | polinaeterna/test_push3 | 2022-11-08T09:21:09.000Z | null | false | 546126dd7206964952182cc541052f1649e78525 | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push3/resolve/main/README.md | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: test
num_bytes: 46
num_examples: 3
- name: train
num_bytes: 116
num_examples: 8
download_size: 1698
dataset_size: 162
---
# Dataset Card for "test_push3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nma | null | null | null | false | 2 | false | Nma/resume_dataset | 2022-11-08T09:25:22.000Z | null | false | 2bebc3c89a3f327680c2f6ae9d62b1e86fb6b6b6 | [] | [] | https://huggingface.co/datasets/Nma/resume_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 355695532
num_examples: 161071
- name: train
num_bytes: 1421896716
num_examples: 644282
download_size: 896434509
dataset_size: 1777592248
---
# Dataset Card for "resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amir7d0 | null | null | null | false | 39 | false | amir7d0/tmp | 2022-11-09T13:28:01.000Z | null | false | a0aedcc2333fb5e70217bf070e0ae193c2254897 | [] | [] | https://huggingface.co/datasets/amir7d0/tmp/resolve/main/README.md | ---
dataset_info:
features:
- name: SAMPLE_ID
dtype: int64
- name: TEXT
dtype: string
- name: URL
dtype: string
- name: IMAGE_PATH
dtype: string
- name: IMAGE
dtype: image
splits:
- name: train
num_bytes: 599579428.0
num_examples: 100000
download_size: 2124724355
dataset_size: 599579428.0
---
# Dataset Card for "tmp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna | null | null | null | false | 28 | false | polinaeterna/test_push4 | 2022-11-08T09:47:55.000Z | null | false | c3b175a8dfdcaaf7ad64a1f0ba2939f4266948bb | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push4/resolve/main/README.md | ---
dataset_info:
- config_name: v1
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
- name: test
- config_name: v2
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
- name: test
---
# Dataset Card for "test_push4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polinaeterna | null | null | null | false | 2 | false | polinaeterna/test_push_no_conf | 2022-11-08T09:54:13.000Z | null | false | c99d6d2f4a02dacd94f6ffd3055db5472613750e | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push_no_conf/resolve/main/README.md | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
num_bytes: 120
num_examples: 8
- name: test
num_bytes: 46
num_examples: 3
download_size: 1712
dataset_size: 166
---
# Dataset Card for "test_push_no_conf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nma | null | null | null | false | null | false | Nma/tokenize_resume_dataset | 2022-11-08T09:56:21.000Z | null | false | f0471f90290414cceb9e69cc3c16ffff338c4e9d | [] | [] | https://huggingface.co/datasets/Nma/tokenize_resume_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: test
num_bytes: 275640050
num_examples: 161071
- name: train
num_bytes: 1102620205
num_examples: 644282
download_size: 521528169
dataset_size: 1378260255
---
# Dataset Card for "tokenize_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nma | null | null | null | false | 66 | false | Nma/lm_resume_dataset | 2022-11-08T10:23:33.000Z | null | false | c6abcf44778df8dbf38ba6599b19ed196ea6e5ae | [] | [] | https://huggingface.co/datasets/Nma/lm_resume_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 714031412
num_examples: 107083
- name: train
num_bytes: 2856345596
num_examples: 428365
download_size: 1035174948
dataset_size: 3570377008
---
# Dataset Card for "lm_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
superchthonic | null | null | null | false | null | false | superchthonic/logos-dataset | 2022-11-08T10:42:10.000Z | null | false | 8616749880709e4f10ab40bcad2fc62e33caed34 | [] | [] | https://huggingface.co/datasets/superchthonic/logos-dataset/resolve/main/README.md | All images taken from https://github.com/InputBlackBoxOutput/logo-images-dataset |
polinaeterna | null | null | null | false | null | false | polinaeterna/test_push_two_confs | 2022-11-08T11:40:48.000Z | null | false | 1fa6a3831dae1addb2e2f712bbf13edcd94b274a | [] | [] | https://huggingface.co/datasets/polinaeterna/test_push_two_confs/resolve/main/README.md | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: string
splits:
- name: train
num_bytes: 120
num_examples: 8
- name: test
num_bytes: 46
num_examples: 3
download_size: 1712
dataset_size: 166
---
# Dataset Card for "test_push_two_confs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ibm | null | null | null | false | null | false | ibm/vira-intents-live | 2022-11-08T12:34:40.000Z | null | false | f97c40ddb39bdf364fde4c7970b7ba5a16d2470a | [] | [] | https://huggingface.co/datasets/ibm/vira-intents-live/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 227106
num_examples: 3140
- name: train
num_bytes: 536982
num_examples: 7434
download_size: 341066
dataset_size: 764088
---
# Dataset Card for "vira-intents-live"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ayush2609 | null | null | null | false | 1 | false | Ayush2609/AJ_sentence | 2022-11-08T14:58:24.000Z | null | false | 667f41421b215542d57fb403481f6dab10c0759f | [] | [] | https://huggingface.co/datasets/Ayush2609/AJ_sentence/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 249843.62830074583
num_examples: 4464
- name: validation
num_bytes: 27816.37169925418
num_examples: 497
download_size: 179173
dataset_size: 277660.0
---
# Dataset Card for "AJ_sentence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | 5 | false | mboth/clustering_datenpunkte | 2022-11-08T13:50:41.000Z | null | false | 939ebf1db9a2ef397c6e96808573b439bd0323fe | [] | [] | https://huggingface.co/datasets/mboth/clustering_datenpunkte/resolve/main/README.md | ---
dataset_info:
features:
- name: index
dtype: int64
- name: text
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: label
dtype:
class_label:
names:
0: Aktivierung Raumoptimierung
1: Aktuelle Leistung
2: Alarm Frostschutz
3: Alarme Zurück Gestellt
4: Alarmmeldung
5: Alarmmeldung Frostschutz
6: Anforderung
7: Anforderung Tableau
8: Anhebung Vorlauftemperatur
9: Anzahl Schaltungen
10: Befehlausführungskontrolle
11: Befehlsausführkontrolle
12: Betriebsmeldung Präsenzmelder
13: Betriebsmeldung Start
14: Betriebsstunden
15: BetriebsstundenPumpe
16: Doppelpumpen
17: ExterneVorrangschaltungAktiv
18: Freigabe
19: Freigabe Heizung
20: Freigabe Raumkorrektur
21: Freigabe Stellantrieb
22: Freigabe Stützbetrieb
23: Freigabe Stützbetrieb Nacht Ventil
24: Freigabe Stützbetrieb Tag Ventil
25: Freigabe Zeitprogramm
26: Grenzwert Frost
27: Grenzwert Rücklauftemperatur
28: Grenzwert Rücklauftemperatur Sekundär
29: Grenzwert Vorlauftemperatur Sekundär
30: Heizkurve
31: Laufzeit 3 Punkt Antrieb
32: Laufzeit Nächste Wartung
33: Laufzeit Ventil
34: ManagementEbene
35: Messwert Abgastemperatur
36: Messwert Außentemperatur
37: Messwert CO2
38: Messwert Differenzdruck
39: Messwert Drehzahl
40: Messwert Druck
41: Messwert Durchfluss
42: Messwert Energieverbrauch
43: Messwert Feuchte
44: Messwert Gasverbrauch
45: Messwert Leistungsaufnahme
46: Messwert Luftqualität
47: Messwert Primärluft
48: Messwert Raumtemperatur
49: Messwert Rücklauftemperatur
50: Messwert Rücklauftemperatur Primär
51: Messwert Rücklauftemperatur Sekundär
52: Messwert Spannung
53: Messwert Speichertemperatur Oben
54: Messwert Strom
55: Messwert Stromaufnahme
56: Messwert Temperatur
57: Messwert Temperatur Austritt Zuluft
58: Messwert Temperatur Einschubrohr
59: Messwert Temperatur Eintritt Abluft
60: Messwert Temperatur Eintritt Zuluft
61: Messwert Temperatur Generator
62: Messwert Volumenstrom
63: Messwert Vorlauftemperatur
64: Messwert Vorlauftemperatur Primär
65: Messwert Vorlauftemperatur Sekundär
66: MesswertSpeichertemperatur
67: MesswertSpeichertemperaturMitte
68: MesswertSpeichertemperaturUnten
69: Offset Vorlauftemperatur
70: Pumpe
71: Pumpenwechsel
72: Regler
73: Reset Betriebsstunden
74: Restsauerstoff
75: Rohrheizung
76: Rueckmeldung Blockierschutz
77: Rücklauftemperatur
78: Rückmeldung Absenkbetrieb
79: Rückmeldung Anfahrbetrieb
80: Rückmeldung Anlage Fern
81: Rückmeldung Aufheizbetrieb
82: Rückmeldung Batterie
83: Rückmeldung Betrieb
84: Rückmeldung Betriebsart
85: Rückmeldung Blockierschutz Brunnenpumpe
86: Rückmeldung Blockierschutz Umwälzpumpe
87: Rückmeldung Drehzahl
88: Rückmeldung Ferienprogramm
89: Rückmeldung Freie Nachtkühlung
90: Rückmeldung Frostschutz
91: Rückmeldung Gedämpfte Außentemperatur
92: Rückmeldung Grenzwert Soll Ist Abweichung Temperatur
93: Rückmeldung Handschaltung
94: Rückmeldung Handschaltung Brunnenpumpe
95: Rückmeldung Handschaltung Fernwärme
96: Rückmeldung Handschaltung Pumpe
97: Rückmeldung Handschaltung Ventil
98: Rückmeldung Handschaltung Wärmepumpe
99: Rückmeldung Klappe
100: Rückmeldung Klappe Auf
101: Rückmeldung Klappe Offen
102: Rückmeldung Klappe Zu
103: Rückmeldung Kommunikation
104: Rückmeldung Laufüberwachung
105: Rückmeldung Leistung
106: Rückmeldung Nachtbetrieb
107: Rückmeldung Normalbetrieb
108: Rückmeldung Not Aus
109: Rückmeldung Nutzzeitverlängerung
110: Rückmeldung Quittierung
111: Rückmeldung Regelabweichung
112: Rückmeldung Reperaturschalter
113: Rückmeldung Restlaufzeit Nutzzeitverlängerung
114: Rückmeldung Schnecke Leer
115: Rückmeldung Sollwertabweichung Vorlauftemperatur
116: Rückmeldung Spülen
117: Rückmeldung Stellsignal
118: Rückmeldung Stellsignal Ventil
119: Rückmeldung Tagbetrieb
120: Rückmeldung Umschaltventil Zu
121: Rückmeldung Ventil
122: Rückmeldung Ventil Handschaltung
123: Rückmeldung Ventil Rücklauf
124: Rückmeldung Wärmebedarf Heizung
125: Rückmeldung Zeitplan
126: Rückmeldung betrieb
127: Rückmeldung Ölnachspeisung Aktiv
128: RückmeldungHandschaltungKlappe
129: RückmeldungHandschaltungVentil
130: Schalftbefehl Anlage Fern
131: Schaltbefehl
132: Schaltbefehl Anlage
133: Schaltbefehl Blockierschutz
134: Schaltbefehl Frostschutz
135: Schaltbefehl Gleitendes Schalten
136: Schaltbefehl Klappe
137: Schaltbefehl Nachtabsenkung
138: Schaltbefehl Nachtkühlung
139: Schaltbefehl Not Aus
140: Schaltbefehl Nutzzeitverlängerung
141: Schaltbefehl Optimierte Luftqualität
142: Schaltbefehl Pumpe
143: Schaltbefehl Raumkorrektur
144: Schaltbefehl Start Stop Optimierung
145: Schaltbefehl Tagesprogramm
146: Schaltbefehl Zeitprogramm
147: Sollwert Abschalten Stützbetrieb
148: Sollwert Abschaltung
149: Sollwert Aufheizzeit
150: Sollwert Ausschaltverzögerung
151: Sollwert Außentemperatur
152: Sollwert Befeuchten
153: Sollwert CO2
154: Sollwert CO2 Konzentration
155: Sollwert CO2 Konzentration Max
156: Sollwert CO2 Max
157: Sollwert Dauerfreigabe
158: Sollwert Druck
159: Sollwert Einschaltverzögerung
160: Sollwert FU
161: Sollwert Feuchte
162: Sollwert Feuchte Min
163: Sollwert Freie Nachtkühlung
164: Sollwert Frostschutz
165: Sollwert Grenzwert Soll Ist Abweichung Temperatur
166: Sollwert Kühlbedarf
167: Sollwert Laufzeit
168: Sollwert Laufzeit Blockierschutz
169: Sollwert Leistung
170: Sollwert Maximale Aufheizzeit
171: Sollwert Maximale Einschaltverzögerung
172: Sollwert Maximale Rücklauftemperatur
173: Sollwert Maximale Vorlauftemperatur
174: Sollwert Minimale Außentemperatur
175: Sollwert Minimale Raumtemperatur
176: Sollwert Minimale Vorlauftemperatur
177: Sollwert Mischventil
178: Sollwert Nachlaufzeit
179: Sollwert Nacht
180: Sollwert Nachtabsenkung
181: Sollwert Nachtabsenkung Vorlauftemperatur
182: Sollwert Nutzzeitverlängerung
183: Sollwert Raumkorrektur
184: Sollwert Raumtemperatur
185: Sollwert Raumtemperatur Nacht
186: Sollwert Raumtemperatur Tag
187: Sollwert Reset Betriebsstunden
188: Sollwert Rücklauftemperatur
189: Sollwert Speicherfähigkeit
190: Sollwert Speichertemperatur Unten
191: Sollwert Spülzeit
192: Sollwert Stellsignal
193: Sollwert Stellsignal Max
194: Sollwert Stellsignal Min
195: Sollwert Stützbetrieb Nacht
196: Sollwert Stützbetrieb Tag
197: Sollwert Tag
198: Sollwert Temperatur
199: Sollwert Temperatur Max
200: Sollwert Temperatur Min
201: Sollwert Volumenstrom
202: Sollwert Volumenstrom Max
203: Sollwert Volumenstrom Min
204: Sollwert Vorlauftemperatur
205: Sollwert Wartezeit
206: Sollwert Wärmebedarf
207: Sollwert Überhöhung Hydraulische Weiche
208: SollwertAußentemperaturMaximalTag
209: SollwertMaximaleHysteresSpeichertemperatur
210: SollwertNachlaufzeitPumpe
211: SollwertSpeichertemperatur
212: Sollwertkorrektur Vorlauftemperatur
213: Sollwertverschiebung
214: Status Übersteuern Ein
215: Stellbefehl
216: Stellbefehl Anlage
217: Stellbefehl Max
218: Stellbefehl Min
219: Stellbefehl Ventil
220: Stellbefehl WRG Bypass
221: Störmeldung
222: Stützbetrieb Nacht Erreicht
223: Warmwasserbereitung
224: Warnemldung Temperatur Niedrig
225: Warnmeldung
226: Warnmeldung CO2 Hoch
227: Warnmeldung Feuchte
228: Warnmeldung Temperatur Hoch
229: Wartungsintervall
230: Wartungsmeldung
231: Wartungsmeldung Abluft
232: Wartungsmeldung Außenluft
233: Wartungsmeldung Filter
234: Wartungsmeldung Zuluft
235: Wärmemengenzähler
236: Zähler
237: Zähler Volumenstrom Förderbrunnen
238: Zählwert Kältemenge
239: Zählwert Kühlwasser
240: Überhöhung Kesselanlage
- name: Komponente
dtype: string
- name: Grundfunktion
dtype: string
- name: ZweiteGrundfunktion
dtype: string
- name: hypothesis
dtype: string
- name: label_not_encoded
dtype: string
splits:
- name: train
num_bytes: 1603197
num_examples: 4957
download_size: 324603
dataset_size: 1603197
---
# Dataset Card for "clustering_datenpunkte"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
graphs-datasets | null | null | null | false | 118 | false | graphs-datasets/artificial-unbalanced-500K | 2022-11-08T14:16:21.000Z | null | false | f41edc00905904578c4be9dd48c81da5b159ea05 | [] | [] | https://huggingface.co/datasets/graphs-datasets/artificial-unbalanced-500K/resolve/main/README.md | ---
dataset_info:
features:
- name: edge_index
sequence:
sequence: int64
- name: y
sequence: int64
- name: num_nodes
dtype: int64
splits:
- name: train
num_bytes: 2712963616
num_examples: 499986
download_size: 398809184
dataset_size: 2712963616
---
# Dataset Card for "artificial-unbalanced-500Kb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Stern5497 | null | null | null | false | 276 | false | Stern5497/scrc_scp | 2022-11-08T17:33:42.000Z | null | false | 943ed5dd9445096165a76f1c2c717f3506aa14bb | [] | [] | https://huggingface.co/datasets/Stern5497/scrc_scp/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- found
license:
- unknown
multilinguality:
- multilingual
pretty_name: Swiss Criticality Prediction for Swiss Supreme Court
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- multi-label-classification
- multi-class-classification
--- |
Andris2067 | null | null | null | false | null | false | Andris2067/Ainava | 2022-11-08T16:14:01.000Z | null | false | bb2672ee1cfd0d5b8ec99ccce7f08a77c0d119b7 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Andris2067/Ainava/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
willjejones | null | null | null | false | 46 | false | willjejones/cutout_men_standing | 2022-11-08T17:23:46.000Z | null | false | 7a908ef6413e1548c13b6650f6d55f9c8303d6d6 | [] | [] | https://huggingface.co/datasets/willjejones/cutout_men_standing/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3475830.0
num_examples: 33
download_size: 3470772
dataset_size: 3475830.0
---
# Dataset Card for "cutout_men_standing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
poppingtonic | null | null | null | false | null | false | poppingtonic/book-dataset | 2022-11-08T21:08:47.000Z | null | false | 6586dd8a9de762b7b8c7ed19b5e1b9feca2df218 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/poppingtonic/book-dataset/resolve/main/README.md | ---
license: afl-3.0
---
|
zahragolpa | null | null | null | false | null | false | zahragolpa/Caltech101 | 2022-11-08T21:34:53.000Z | null | false | 0810deca4374fdadc5c433acebf0d1f8b16c7312 | [] | [
"license:cc"
] | https://huggingface.co/datasets/zahragolpa/Caltech101/resolve/main/README.md | ---
license: cc
---
|
N1ckQt | null | null | null | false | 27 | false | N1ckQt/e926-character-portraits-captions | 2022-11-09T05:55:19.000Z | null | false | 41339399f4ba8e7badaad58f07811ddbd50701cc | [] | [] | https://huggingface.co/datasets/N1ckQt/e926-character-portraits-captions/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 389186711.85
num_examples: 1575
download_size: 385109469
dataset_size: 389186711.85
---
# Dataset Card for "e926-character-portraits-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bahidalgo | null | null | null | false | null | false | bahidalgo/Me | 2022-11-08T22:47:56.000Z | null | false | f0425b614beebe3234f5f4256600d56b0d369947 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/bahidalgo/Me/resolve/main/README.md | ---
license: afl-3.0
---
|
lmqg | null | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | false | 158 | false | lmqg/qa_squadshifts_pseudo | 2022-11-16T18:06:30.000Z | null | false | adec1ebbf1e845ebc4bf97fad9273cfb558d9c07 | [] | [
"arxiv:2210.03992",
"license:cc-by-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/lmqg/qa_squadshifts_pseudo/resolve/main/README.md | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset on SQuADShifts.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts_pseudo"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for question-answering-based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
This contains synthetic QA datasets created with the following QG models:
- [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
- [lmqg/t5-small-squad-multitask](https://huggingface.co/lmqg/t5-small-squad-multitask)
- [lmqg/t5-base-squad-multitask](https://huggingface.co/lmqg/t5-base-squad-multitask)
- [lmqg/t5-large-squad-multitask](https://huggingface.co/lmqg/t5-large-squad-multitask)
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
| name | domain | train | validation | test |
|--------------------------|----------|-------|------------|------|
| t5-small-squad | amazon | 3295 | 1648 | 4942 |
| t5-base-squad | amazon | 3295 | 1648 | 4942 |
| t5-large-squad | amazon | 3295 | 1648 | 4942 |
| t5-small-squad-multitask | amazon | 29382 | 14628 | 4942 |
| t5-base-squad-multitask | amazon | 29438 | 14689 | 4942 |
| t5-large-squad-multitask | amazon | 29607 | 14783 | 4942 |
| bart-base-squad | amazon | 3295 | 1648 | 4942 |
| bart-large-squad | amazon | 3295 | 1648 | 4942 |
| t5-small-squad | new_wiki | 2646 | 1323 | 3969 |
| t5-base-squad | new_wiki | 2646 | 1323 | 3969 |
| t5-large-squad | new_wiki | 2646 | 1323 | 3969 |
| t5-small-squad-multitask | new_wiki | 12744 | 6443 | 3969 |
| t5-base-squad-multitask | new_wiki | 12877 | 6525 | 3969 |
| t5-large-squad-multitask | new_wiki | 12949 | 6562 | 3969 |
| bart-base-squad | new_wiki | 2646 | 1323 | 3969 |
| bart-large-squad | new_wiki | 2646 | 1323 | 3969 |
| t5-small-squad | nyt | 3355 | 1678 | 5032 |
| t5-base-squad | nyt | 3355 | 1678 | 5032 |
| t5-large-squad | nyt | 3355 | 1678 | 5032 |
| t5-small-squad-multitask | nyt | 20625 | 10269 | 5032 |
| t5-base-squad-multitask | nyt | 20850 | 10395 | 5032 |
| t5-large-squad-multitask | nyt | 20939 | 10416 | 5032 |
| bart-base-squad | nyt | 3355 | 1678 | 5032 |
| bart-large-squad | nyt | 3355 | 1678 | 5032 |
| t5-small-squad | reddit | 3268 | 1634 | 4901 |
| t5-base-squad | reddit | 3268 | 1634 | 4901 |
| t5-large-squad | reddit | 3268 | 1634 | 4901 |
| t5-small-squad-multitask | reddit | 30485 | 14888 | 4901 |
| t5-base-squad-multitask | reddit | 30655 | 15058 | 4901 |
| t5-large-squad-multitask | reddit | 31147 | 15275 | 4901 |
| bart-base-squad | reddit | 3268 | 1634 | 4901 |
| bart-large-squad | reddit | 3268 | 1634 | 4901 |
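A minimal loading sketch: the exact config naming scheme is not documented above, so listing the available configs first is the safe route (the choice of `configs[0]` is illustrative only):
```python
from datasets import get_dataset_config_names, load_dataset

# The config scheme presumably combines a QG model and a domain, but it is
# not documented here, so list the configs before picking one.
configs = get_dataset_config_names("lmqg/qa_squadshifts_pseudo")
print(configs)

# Load one config; replace configs[0] with the model/domain you need.
data = load_dataset("lmqg/qa_squadshifts_pseudo", configs[0])
```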
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
pacovaldez | null | null | null | false | 35 | false | pacovaldez/stackoverflow-questions | 2022-11-10T00:14:37.000Z | null | false | 869802e52b4dfa074d8a8e255ce85580711cdc25 | [] | [
"annotations_creators:machine-generated",
"language:en",
"language_creators:found",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:stackoverflow",
"tags:technical questions",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/pacovaldez/stackoverflow-questions/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process is the prioritization of the question. The classification scale usually consists of four values (P0, P1, P2, and P3) whose meanings differ across the industry. On the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are usually annotated and curated by thousands of people, providing metadata about the quality of each question. This dataset aims to provide an accurate prioritization for programming questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
title: string,
body: string,
label: int
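A minimal loading sketch (assuming the repository loads directly with `datasets`; the field names follow the structure above):
```python
from datasets import load_dataset

dataset = load_dataset("pacovaldez/stackoverflow-questions", split="train")

# Each record carries a question title, its body, and a priority label 0-3.
example = dataset[0]
print(example["title"], example["label"])
```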
### Data Splits
The split is 40/40/20, and the classes have been balanced to be around the same size.
## Dataset Creation
The data set was extracted and labeled with the following query in BigQuery:
```
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:
| label | count |
|---|---|
| 0 | 977424 |
| 1 | 2401534 |
| 2 | 3418179 |
| 3 | 16222990 |
| Grand Total | 23020127 |
The data was sampled from each class to have around the same amount of records on every class.
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
|
iuliaturc-personal | null | null | null | false | 23 | false | iuliaturc-personal/rick-and-morty | 2022-11-09T02:44:42.000Z | null | false | e784a9dd1caa90af009343fa342973c3e961bcaf | [] | [] | https://huggingface.co/datasets/iuliaturc-personal/rick-and-morty/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 8342491.0
num_examples: 113
download_size: 8269815
dataset_size: 8342491.0
---
# Dataset Card for "rick-and-morty"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jage | null | null | null | false | 26 | false | jage/dataset_from_synthea_for_NER_with_train_val_test_splits | 2022-11-09T02:21:11.000Z | null | false | f42882dca80f8604ea1ee720b24e45079d610a47 | [] | [] | https://huggingface.co/datasets/jage/dataset_from_synthea_for_NER_with_train_val_test_splits/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-DATE
2: I-DATE
3: B-NAME
4: I-NAME
5: B-AGE
6: I-AGE
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 6614328
num_examples: 19176
- name: train
num_bytes: 32139432.0
num_examples: 92300
- name: val
num_bytes: 13463574.0
num_examples: 38138
download_size: 4703482
dataset_size: 52217334.0
---
# Dataset Card for "dataset_from_synthea_for_NER_with_train_val_test_splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skashyap96 | null | null | null | false | null | false | skashyap96/autotrain-data-led-samsum-dialogsum | 2022-11-09T08:45:51.000Z | null | false | 4bf5b5ed178e0e8052b3ec7ea5f7d745ad63cb3b | [] | [] | https://huggingface.co/datasets/skashyap96/autotrain-data-led-samsum-dialogsum/resolve/main/README.md | ---
task_categories:
- conditional-text-generation
---
# AutoTrain Dataset for project: led-samsum-dialogsum
## Dataset Description
This dataset has been automatically processed by AutoTrain for project led-samsum-dialogsum.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 0,
"feat_id": 0,
"text": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you tomorrow :-)",
"target": "Amanda baked cookies and will bring Jerry some tomorrow."
},
{
"feat_Unnamed: 0": 1,
"feat_id": 1,
"text": "Olivia: Who are you voting for in this election? \nOliver: Liberals as always.\nOlivia: Me too!!\nOliver: Great",
"target": "Olivia and Olivier are voting for liberals in this election. "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"feat_id": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 27191 |
| valid | 1318 |
|
nlhuong | null | null | null | false | null | false | nlhuong/panda_and_koala | 2022-11-12T10:18:12.000Z | null | false | ef21714574a046223d5e3d0dae6ec3c9d6f7d9c4 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/nlhuong/panda_and_koala/resolve/main/README.md | ---
license: artistic-2.0
---
|
camenduru | null | null | null | false | 16 | false | camenduru/plushies | 2022-11-09T06:54:54.000Z | null | false | 177029cf50bea30e0a845457f21fcbe761c85018 | [] | [] | https://huggingface.co/datasets/camenduru/plushies/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 42942055.0
num_examples: 730
download_size: 42653871
dataset_size: 42942055.0
---
# Dataset Card for "plushies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nma | null | null | null | false | 4 | false | Nma/resume_dataset_train | 2022-11-09T07:20:47.000Z | null | false | a7d7dedccabae5165972e24bcbd4ef50723db0d7 | [] | [] | https://huggingface.co/datasets/Nma/resume_dataset_train/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2856338396
num_examples: 428365
download_size: 828086360
dataset_size: 2856338396
---
# Dataset Card for "resume_dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nma | null | null | null | false | 3 | false | Nma/resume_dataset_test | 2022-11-09T07:21:01.000Z | null | false | 2d9cb87dc7d013ac635c85ce578fcb53d526a9b5 | [] | [] | https://huggingface.co/datasets/Nma/resume_dataset_test/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 714029588
num_examples: 107083
download_size: 207066918
dataset_size: 714029588
---
# Dataset Card for "resume_dataset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sotaro0124 | null | null | null | false | null | false | Sotaro0124/Ainu-Japan_translation_model | 2022-11-09T08:11:39.000Z | null | false | 3fbbcbdb0f6ead4b2933547ceea3729e2dc463c2 | [] | [] | https://huggingface.co/datasets/Sotaro0124/Ainu-Japan_translation_model/resolve/main/README.md |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
aarimond | null | null | null | false | 14 | false | aarimond/test_US-DE | 2022-11-09T08:43:50.000Z | null | false | cab5e62b9d485b386eba9049c09d49ad784f91cd | [] | [] | https://huggingface.co/datasets/aarimond/test_US-DE/resolve/main/README.md | ---
dataset_info:
features:
- name: LEI
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: 1HXP
1: 4FSX
2: '8888'
3: '9999'
4: 9ASJ
5: HZEH
6: MIPY
7: QF4W
8: T91T
9: TGMR
10: XTIQ
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 814317.4195268975
num_examples: 10948
- name: train
num_bytes: 2850036.5878710384
num_examples: 38317
- name: validation
num_bytes: 407233.0902365513
num_examples: 5475
download_size: 2701863
dataset_size: 4071587.097634487
---
# Dataset Card for "test_US-DE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
teticio | null | null | null | false | 1 | false | teticio/audio-diffusion-1024 | 2022-11-09T10:49:29.000Z | null | false | 47d0385d3210b59938b3a7cca665abab29eccff4 | [] | [
"size_categories:10K<n<100K",
"tags:audio",
"tags:spectrograms",
"task_categories:image-to-image"
] | https://huggingface.co/datasets/teticio/audio-diffusion-1024/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: Mel spectrograms of music
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- audio
- spectrograms
task_categories:
- image-to-image
task_ids: []
---
Over 20,000 256x256 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
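As a rough sketch of the audio-to-spectrogram step (a generic librosa version under stated assumptions, not necessarily this repository's exact implementation; mapping `y_res` to the number of mel bins and `x_res` to the number of time frames is an assumption):
```python
import librosa
import numpy as np
from PIL import Image

x_res, y_res = 1024, 1024                        # assumed: time frames x mel bins
sample_rate, n_fft, hop_length = 44100, 2048, 512

def audio_to_mel_image(path: str) -> Image.Image:
    # Load just enough audio to fill x_res spectrogram frames
    num_samples = x_res * hop_length
    y, _ = librosa.load(path, sr=sample_rate, duration=num_samples / sample_rate)
    y = np.pad(y, (0, max(0, num_samples - len(y))))  # zero-pad short clips
    S = librosa.feature.melspectrogram(
        y=y, sr=sample_rate, n_fft=n_fft, hop_length=hop_length, n_mels=y_res
    )
    # Convert power to dB and rescale to an 8-bit grayscale image
    S_db = librosa.power_to_db(S, ref=np.max)
    img = 255 * (S_db - S_db.min()) / (S_db.max() - S_db.min() + 1e-9)
    return Image.fromarray(img.astype(np.uint8)[:, :x_res])
```
The parameters used for this dataset are listed below: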
```
x_res = 1024
y_res = 1024
sample_rate = 44100
n_fft = 2048
hop_length = 512
``` |
wesleywt | null | null | null | false | 64 | false | wesleywt/zhou_ebola_human | 2022-11-09T09:22:57.000Z | null | false | 5c4e8f1aec1d0567864e8d7fd0c13f47084aaa09 | [] | [] | https://huggingface.co/datasets/wesleywt/zhou_ebola_human/resolve/main/README.md | ---
dataset_info:
features:
- name: is_interaction
dtype: int64
- name: protein_1.id
dtype: string
- name: protein_1.primary
dtype: string
- name: protein_2.id
dtype: string
- name: protein_2.primary
dtype: string
splits:
- name: test
num_bytes: 275414
num_examples: 300
- name: train
num_bytes: 29425605
num_examples: 22682
download_size: 6430757
dataset_size: 29701019
---
# Dataset Card for "zhou_ebola_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wesleywt | null | null | null | false | null | false | wesleywt/zhou_h1n1_human | 2022-11-09T09:37:18.000Z | null | false | 225c714c5b77688cad4b649c7c3fcccafcb4ecf7 | [] | [] | https://huggingface.co/datasets/wesleywt/zhou_h1n1_human/resolve/main/README.md | ---
dataset_info:
features:
- name: is_interaction
dtype: int64
- name: protein_1.id
dtype: string
- name: protein_1.primary
dtype: string
- name: protein_2.id
dtype: string
- name: protein_2.primary
dtype: string
splits:
- name: test
num_bytes: 723379
num_examples: 762
- name: train
num_bytes: 28170698
num_examples: 21716
download_size: 12309236
dataset_size: 28894077
---
# Dataset Card for "zhou_h1n1_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wesleywt | null | null | null | false | null | false | wesleywt/williams_mtb_hpidb | 2022-11-09T09:50:16.000Z | null | false | 73bb31ac9151c2afe2dbcf1165d916927f78b0c8 | [] | [] | https://huggingface.co/datasets/wesleywt/williams_mtb_hpidb/resolve/main/README.md | ---
dataset_info:
features:
- name: is_interaction
dtype: int64
- name: protein_1.id
dtype: string
- name: protein_1.primary
dtype: string
- name: protein_2.id
dtype: string
- name: protein_2.primary
dtype: string
splits:
- name: test
num_bytes: 5138954
num_examples: 4192
- name: train
num_bytes: 19964860
num_examples: 16768
download_size: 16427398
dataset_size: 25103814
---
# Dataset Card for "williams_mtb_hpidb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dreamproit | null | null | null | false | null | false | dreamproit/bill_summary_us | 2022-11-09T20:01:15.000Z | null | false | 802bf20080d478fd178c3e3268530ee76ceb15ad | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:bills",
"task_categories:summarization"
] | https://huggingface.co/datasets/dreamproit/bill_summary_us/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license: []
multilinguality:
- monolingual
pretty_name: bill_summarization
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- bills
task_categories:
- summarization
task_ids: []
---
# Dataset Card for "bill_summarization"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/dreamproit/BillML
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summarization).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 186 MB
- **Total amount of disk used:** 177 MB
### Data Fields
- id: id of the bill.
- sections: list of bill sections with section_id and text.
- text: bill text.
- text_len: number of characters in the text.
- summary: summary of the bill.
- summary_len: number of characters in the summary.
- title: official title of the bill.
### Data Splits
No splits.
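A minimal loading sketch (assuming the data is exposed under a single default split, commonly named `train`):
```python
from datasets import load_dataset

ds = load_dataset("dreamproit/bill_summary_us")
bill = ds["train"][0]  # "train" is the assumed default split name
print(bill["title"])
print(bill["text_len"], bill["summary_len"])
```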
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process for understanding their meaning and potential legislative impact.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information for only small bills (10 sections or fewer). It is meant as a starting point for community-driven development of ML models for bill summarization. In the future, we may expand or enhance the dataset in a number of ways: adding metadata, including larger bills, and providing feedback from expert legislative analysts on any automated summaries that are produced.
### Source Data
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the [Govinfo](https://github.com/unitedstates/congress) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
dreamproit.com
### Licensing Information
Bill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under CC0 (https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. |
JohnnyBoy00 | null | null | null | false | null | false | JohnnyBoy00/saf_legal_domain_german | 2022-11-15T10:44:30.000Z | null | false | cbf1aa70e24e1a2f268663d13236f4d22d7fba97 | [] | [
"annotations_creators:expert-generated",
"language:de",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:short answer feedback",
"tags:legal domain",
"task_categories:text2text-generation"
] | https://huggingface.co/datasets/JohnnyBoy00/saf_legal_domain_german/resolve/main/README.md | ---
pretty_name: SAF - Legal Domain - German
annotations_creators:
- expert-generated
language:
- de
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- short answer feedback
- legal domain
task_categories:
- text2text-generation
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: reference_answer
dtype: string
- name: provided_answer
dtype: string
- name: answer_feedback
dtype: string
- name: verification_feedback
dtype: string
- name: error_class
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2223070
num_examples: 1596
- name: validation
num_bytes: 546759
num_examples: 400
- name: test_unseen_answers
num_bytes: 309580
num_examples: 221
- name: test_unseen_questions
num_bytes: 360672
num_examples: 275
download_size: 455082
dataset_size: 3440081
---
# Dataset Card for "saf_legal_domain_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to [saf_micro_job_german](https://huggingface.co/datasets/JohnnyBoy00/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/JohnnyBoy00/saf_communication_networks_english) for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt.",
"verification_feedback": "Correct",
"error_class": "Keine",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `error_class`: a `string` feature representing the type of error identified in the case of a not completely correct answer.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set of the source data).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1596| 400| 221| 275|
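A minimal loading sketch, using the split and field names listed above:
```python
from datasets import load_dataset

ds = load_dataset("JohnnyBoy00/saf_legal_domain_german")
sample = ds["train"][0]
print(sample["question"], "->", sample["verification_feedback"])
print(ds["test_unseen_questions"].num_rows)  # 275
```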
## Additional Information
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. |
teticio | null | null | null | false | 6 | false | teticio/audio-diffusion-512 | 2022-11-09T10:50:22.000Z | null | false | 17235b5ecbf7d15c58c03d0f0bbbf54aec0639b2 | [] | [
"size_categories:10K<n<100K",
"tags:audio",
"tags:spectrograms",
"task_categories:image-to-image"
] | https://huggingface.co/datasets/teticio/audio-diffusion-512/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: Mel spectrograms of music
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- audio
- spectrograms
task_categories:
- image-to-image
task_ids: []
---
Over 20,000 256x256 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
```
x_res = 512
y_res = 512
sample_rate = 22050
n_fft = 2048
hop_length = 512
``` |
Zicara | null | null | null | false | null | false | Zicara/Hands_11k | 2022-11-15T09:11:22.000Z | null | false | 37936dd5fc7d972c40942d2f373d17d0109335a9 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Zicara/Hands_11k/resolve/main/README.md | ---
license: unknown
---
|
pietrolesci | null | null | null | false | null | false | pietrolesci/multiwoz_all_versions | 2022-11-10T11:50:53.000Z | null | false | 98f2b57b8be4e53c21ae981fd42495055004294b | [] | [] | https://huggingface.co/datasets/pietrolesci/multiwoz_all_versions/resolve/main/README.md | This dataset is based on the "cumulative" configuration of the MultiWoz 2.2 dataset available also on the [HuggingFace Hub](https://huggingface.co/datasets/multi_woz_v22).
Therefore, the system and user utterances, the active intents, and the services are exactly the same.
In addition to the data present in version 2.2, this dataset contains, for each dialogue turn, the annotations from versions 2.1, 2.3, and 2.4.
NOTE:
- Each dialogue turn is composed of a system utterance and a user utterance, in this exact order
- The initial system utterance is filled in with the `none` string
- In the last dialogue turn it is always the system that greets the user; this turn is kept and the user utterance is filled in with the `none` string (this turn is usually not considered during evaluation)
- To save the data as an Arrow file, the states must be "padded" so that they all have the same keys, which is done by introducing the `None` value. When loading the data back, it is therefore convenient to have a way to remove the "padding"; a function like the following can help
```python
from typing import Dict, List, Union

def remove_empty_slots(state: Union[Dict[str, Union[List[str], None]], None]) -> Union[Dict[str, List[str]], None]:
    if state is None:
        return None
    # Drop the None "padding" values so that only filled slots remain
    return {k: v for k, v in state.items() if v is not None}
```
- The schema has been updated to make all the versions compatible. Basically, the "book" string has been removed from slots in v2.2. The updated schema is the following
```yaml
attraction-area
attraction-name
attraction-type
hotel-area
hotel-day
hotel-internet
hotel-name
hotel-parking
hotel-people
hotel-pricerange
hotel-stars
hotel-stay
hotel-type
restaurant-area
restaurant-day
restaurant-food
restaurant-name
restaurant-people
restaurant-pricerange
restaurant-time
taxi-arriveby
taxi-departure
taxi-destination
taxi-leaveat
train-arriveby
train-day
train-departure
train-destination
train-leaveat
train-people
``` |
davanstrien | null | null | null | false | 4 | false | davanstrien/hugitnovtest | 2022-11-09T11:29:28.000Z | null | true | 23f55e9f9b9138473e2680615c4a980586ffee6e | [] | [] | https://huggingface.co/datasets/davanstrien/hugitnovtest/resolve/main/README.md | |
andreotte | null | null | null | false | 19 | false | andreotte/multi-label-classification-test | 2022-11-09T12:42:54.000Z | null | false | c9d83173de7024e112c2d0c815fb0c2b1301dc1e | [] | [] | https://huggingface.co/datasets/andreotte/multi-label-classification-test/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: Door
1: Eaves
2: Gutter
3: Vegetation
4: Vent
5: Window
- name: pixel_values
dtype: image
splits:
- name: test
num_bytes: 9476052.0
num_examples: 151
- name: train
num_bytes: 82422534.7
num_examples: 1315
download_size: 91894615
dataset_size: 91898586.7
---
# Dataset Card for "multi-label-classification-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mboth | null | null | null | false | 6 | false | mboth/klassifizierung_gewerke_hamburg_object_types | 2022-11-09T12:54:01.000Z | null | false | b029e42220412eac35591b2d39c5615d63395da8 | [] | [] | https://huggingface.co/datasets/mboth/klassifizierung_gewerke_hamburg_object_types/resolve/main/README.md | ---
dataset_info:
features:
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Datatype
dtype: string
- name: Unit
dtype: string
- name: label
dtype:
class_label:
names:
0: Abwasser-Wasser-Gasanlagen
1: Andere_Anlagen
2: Lufttechnische_Anlagen
3: Sichern
4: Starkstromanlagen
5: Wärmeversorgungsanlagen
- name: text
dtype: string
splits:
- name: test
num_bytes: 18551.703337453648
num_examples: 81
- name: train
num_bytes: 148184.59332509272
num_examples: 647
- name: valid
num_bytes: 18551.703337453648
num_examples: 81
download_size: 53166
dataset_size: 185288.00000000003
---
# Dataset Card for "klassifizierung_gewerke_hamburg_object_types"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Rahaneg | null | null | null | false | 8 | false | Rahaneg/opdQA | 2022-11-10T03:16:48.000Z | null | false | 6bd93f58710308b5e09fd788a8c9585fe20fe4c6 | [] | [] | https://huggingface.co/datasets/Rahaneg/opdQA/resolve/main/README.md | |
loubnabnl | null | null | null | false | null | false | loubnabnl/dummy_data_clean | 2022-11-09T17:05:43.000Z | null | false | 75b569b006880d60ccd260a7f9492309f2bd7e5e | [] | [] | https://huggingface.co/datasets/loubnabnl/dummy_data_clean/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: path
dtype: string
- name: annotation_id
dtype: string
- name: pii
dtype: string
- name: pii_modified
dtype: string
splits:
- name: train
num_bytes: 3808098.717948718
num_examples: 400
download_size: 1311649
dataset_size: 3808098.717948718
---
# Dataset Card for "dummy_data_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rafaelmotac | null | null | null | false | null | false | rafaelmotac/rafaelcorreia | 2022-11-09T22:39:48.000Z | null | false | b5742c509417def7094c043d94a9c311b1d63b8e | [] | [] | https://huggingface.co/datasets/rafaelmotac/rafaelcorreia/resolve/main/README.md | My photos to train AI |
ScandEval | null | null | null | false | 53 | false | ScandEval/swerec-mini | 2022-11-09T18:16:20.000Z | null | false | 6212acac76dda6a550bd1e509ee4c0e6dccb5dee | [] | [] | https://huggingface.co/datasets/ScandEval/swerec-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 713970
num_examples: 2048
- name: train
num_bytes: 355633
num_examples: 1024
- name: val
num_bytes: 82442
num_examples: 256
download_size: 684710
dataset_size: 1152045
---
# Dataset Card for "swerec-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muhammadbilal5110 | null | null | null | false | null | false | muhammadbilal5110/indian_food_images | 2022-11-09T18:20:32.000Z | null | false | 0172a82241343327a319f1afa42957039e6ab9b4 | [] | [] | https://huggingface.co/datasets/muhammadbilal5110/indian_food_images/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: burger
1: butter_naan
2: chai
3: chapati
4: chole_bhature
5: dal_makhani
6: dhokla
7: fried_rice
8: idli
9: jalebi
10: kaathi_rolls
11: kadai_paneer
12: kulfi
13: masala_dosa
14: momos
15: paani_puri
16: pakode
17: pav_bhaji
18: pizza
19: samosa
splits:
- name: test
num_bytes: -50510587.406603925
num_examples: 941
- name: train
num_bytes: -283960930.24139607
num_examples: 5328
download_size: 1600880763
dataset_size: -334471517.648
---
# Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lmqg | null | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | null | false | null | false | lmqg/qa_harvesting_from_wikipedia_pseudo | 2022-11-10T11:30:06.000Z | null | false | bac3f20df77a27858495b76880121c1e9531d9c7 | [] | [
"arxiv:2210.03992",
"license:cc-by-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia_pseudo/resolve/main/README.md | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia_pseudo"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_harvesting_from_wikipedia`](https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia), 1 million paragraph-answer pairs collected in [Du and Cardie, 2018](https://aclanthology.org/P18-1177/), made for the question-answering based evaluation (QAE) of question generation models proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The `train` split is the synthetic data and the `validation` split is the original validation set of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), on which the model should be evaluated.
This contains synthetic QA datasets created with the following QG models:
- [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
See more details about QAE at [https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation](https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|
|--------:|---------:|
|1,092,142| 10,570 |
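A minimal loading sketch; the configuration name below is a hypothetical example based on the QG model list above:
```python
from datasets import load_dataset

# "t5-base-squad" is an assumed config name, following the model list above
ds = load_dataset("lmqg/qa_harvesting_from_wikipedia_pseudo", "t5-base-squad")
print(ds["train"].num_rows)       # synthetic QA pairs for training
print(ds["validation"].num_rows)  # original SQuAD validation set (10,570)
```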
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
dreamproit | null | null | null | false | null | false | dreamproit/bill_summary | 2022-11-10T08:18:27.000Z | null | false | e431cd6f537d0c97e854ed2137f4f996d49af5c5 | [] | [] | https://huggingface.co/datasets/dreamproit/bill_summary/resolve/main/README.md | More information coming soon. |
dreamproit | null | null | null | false | null | false | dreamproit/bill_summary_ua | 2022-11-10T08:18:05.000Z | null | false | 5eb17d96da67cef7250294e82b6a55ea81dcd5d6 | [] | [] | https://huggingface.co/datasets/dreamproit/bill_summary_ua/resolve/main/README.md | More information coming soon. |
NosaOmer | null | null | null | false | null | false | NosaOmer/arnosa | 2022-11-09T20:14:33.000Z | null | false | c9f2148409945b463a4ec616f74e3d193bde1c64 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/NosaOmer/arnosa/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
pszemraj | null | null | null | false | 64 | false | pszemraj/text2image-multi-prompt | 2022-11-14T16:04:12.000Z | null | false | debccb3dbfb8023078edd4d9999b25849edfd1f3 | [] | [
"license:apache-2.0",
"language:en",
"multilinguality:monolingual",
"source_datasets:bartman081523/stable-diffusion-discord-prompts",
"source_datasets:succinctly/midjourney-prompts",
"source_datasets:Gustavosta/Stable-Diffusion-Prompts",
"tags:text generation"
] | https://huggingface.co/datasets/pszemraj/text2image-multi-prompt/resolve/main/README.md | ---
license: apache-2.0
language:
- en
multilinguality:
- monolingual
pretty_name: multi text2image prompts a dataset collection
source_datasets:
- bartman081523/stable-diffusion-discord-prompts
- succinctly/midjourney-prompts
- Gustavosta/Stable-Diffusion-Prompts
tags:
- text generation
---
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing "model specific APIs" like the "--ar" for Midjourney and so on
- data de-duplicated on a basic level: exactly duplicate prompts were dropped (_after cleaning and normalization_)
## contents
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
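A minimal loading sketch (feature and split names as shown above):
```python
from datasets import load_dataset

ds = load_dataset("pszemraj/text2image-multi-prompt")
example = ds["train"][0]
print(example["src_dataset"], "->", example["text"])
```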
_NOTE: as the other two datasets did not have a `validation` split, the validation split of `succinctly/midjourney-prompts` was merged into `train`._ |
nateraw | null | null | null | false | 2 | false | nateraw/quick-captioning-dataset-test | 2022-11-09T23:20:40.000Z | null | false | 3afe16b210dec396ba32a4c4669a951a13c8d1c0 | [] | [] | https://huggingface.co/datasets/nateraw/quick-captioning-dataset-test/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 345244.0
num_examples: 4
download_size: 0
dataset_size: 345244.0
---
# Dataset Card for "quick-captioning-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
treksis | null | null | null | false | 3 | false | treksis/test_pinkeyrepo | 2022-11-10T00:01:25.000Z | null | false | 379266b9d42eae2923d3bb4e2fa5e9e4cdc608fe | [] | [] | https://huggingface.co/datasets/treksis/test_pinkeyrepo/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 906786.0
num_examples: 5
download_size: 908031
dataset_size: 906786.0
---
# Dataset Card for "test_pinkeyrepo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ngadou | null | null | null | false | null | false | Ngadou/Spam_SMS | 2022-11-10T09:06:25.000Z | null | false | ae03d5b8fc12f95b1b965ef6f3fabf29b6eaf2a8 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Ngadou/Spam_SMS/resolve/main/README.md | ---
license: cc
---
## Description
The Spam SMS dataset is a set of tagged SMS messages collected for SMS spam research. It contains 5,574 English SMS messages, each tagged as ham (legitimate) or spam.
Source: [uciml/sms-spam-collection-dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset) |
dvitel | null | null | null | false | 20 | false | dvitel/geo | 2022-11-10T00:50:17.000Z | null | false | eddcf0f010fb54164d0ff44402da8be69ac3684b | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:expert-generated",
"license:gpl-2.0",
"multilinguality:other-en-prolog",
"size_categories:n<1K",
"source_datasets:original",
"tags:geo",
"tags:prolog",
"tags:semantic-parsing",
"tags:code-generation",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"task_ids:explanation-generation"
] | https://huggingface.co/datasets/dvitel/geo/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- gpl-2.0
multilinguality:
- other-en-prolog
pretty_name: GEO - semantic parsing to Geography Prolog queries
size_categories:
- n<1K
source_datasets:
- original
tags:
- geo
- prolog
- semantic-parsing
- code-generation
task_categories:
- text-generation
- text2text-generation
task_ids:
- language-modeling
- explanation-generation
---
The dataset contains queries for a Prolog database of facts about USA geography. Taken from [this source](https://www.cs.utexas.edu/users/ml/nldata/geoquery.html)
dvitel | null | null | null | false | 1 | false | dvitel/hearthstone | 2022-11-10T01:24:14.000Z | null | false | fe7cf7c231bfd0366e56ed6242d1421d23483e1d | [] | [
"language:en",
"license:mit",
"multilinguality:other-en-python",
"size_categories:n<1K",
"tags:code-synthesis",
"tags:semantic-parsing",
"tags:python",
"tags:hearthstone",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/dvitel/hearthstone/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators: []
license:
- mit
multilinguality:
- other-en-python
pretty_name: HEARTHSTONE - synthesis of python code for card game descriptions
size_categories:
- n<1K
source_datasets: []
tags:
- code-synthesis
- semantic-parsing
- python
- hearthstone
task_categories:
- text-generation
task_ids:
- language-modeling
---
Datasets for the HEARTHSTONE card game. Taken from [this source](https://github.com/deepmind/card2code/tree/master/third_party/hearthstone)
|
FAERS-PubMed | null | null | null | false | 86 | false | FAERS-PubMed/full-dataset-latest | 2022-11-10T18:39:23.000Z | null | false | 9dec58186b1cb4f113e2b5ac41808f9a90be0e6b | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/full-dataset-latest/resolve/main/README.md | ---
dataset_info:
features:
- name: article_articletitle
dtype: string
- name: article_pmid
dtype: string
- name: article_abstract
dtype: string
- name: article_authorlist
list:
- name: CollectiveName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: LastName
dtype: string
- name: Suffix
dtype: string
- name: article_journalinfo
dtype: string
- name: article_datecompleted
dtype: string
- name: article_daterevised
dtype: string
- name: article_pubmed_filename
dtype: string
- name: report_literaturereference
dtype: string
- name: report_safetyreportid
dtype: string
- name: report_receivedate
dtype: string
- name: report_patient
struct:
- name: drug
list:
- name: actiondrug
dtype: string
- name: activesubstance
struct:
- name: activesubstancename
dtype: string
- name: drugadditional
dtype: string
- name: drugadministrationroute
dtype: string
- name: drugauthorizationnumb
dtype: string
- name: drugbatchnumb
dtype: string
- name: drugcharacterization
dtype: string
- name: drugcumulativedosagenumb
dtype: string
- name: drugcumulativedosageunit
dtype: string
- name: drugdosageform
dtype: string
- name: drugdosagetext
dtype: string
- name: drugenddate
dtype: string
- name: drugenddateformat
dtype: string
- name: drugindication
dtype: string
- name: drugintervaldosagedefinition
dtype: string
- name: drugintervaldosageunitnumb
dtype: string
- name: drugrecurreadministration
dtype: string
- name: drugseparatedosagenumb
dtype: string
- name: drugstartdate
dtype: string
- name: drugstartdateformat
dtype: string
- name: drugstructuredosagenumb
dtype: string
- name: drugstructuredosageunit
dtype: string
- name: drugtreatmentduration
dtype: string
- name: drugtreatmentdurationunit
dtype: string
- name: medicinalproduct
dtype: string
- name: patientagegroup
dtype: string
- name: patientonsetage
dtype: string
- name: patientonsetageunit
dtype: string
- name: patientsex
dtype: string
- name: patientweight
dtype: string
- name: reaction
list:
- name: reactionmeddrapt
dtype: string
- name: reactionmeddraversionpt
dtype: string
- name: reactionoutcome
dtype: string
- name: summary
struct:
- name: narrativeincludeclinical
dtype: string
- name: report_transmissiondate
dtype: string
- name: report_seriousness
struct:
- name: serious
dtype: string
- name: seriousnesscongenitalanomali
dtype: string
- name: seriousnessdeath
dtype: string
- name: seriousnessdisabling
dtype: string
- name: seriousnesshospitalization
dtype: string
- name: seriousnesslifethreatening
dtype: string
- name: seriousnessother
dtype: string
- name: report_faers_filename
dtype: string
- name: label_seriousness_serious
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
splits:
- name: test
num_bytes: 262533260
num_examples: 103646
- name: train
num_bytes: 1115190268
num_examples: 483665
- name: validation
num_bytes: 156297059
num_examples: 65856
download_size: 576165810
dataset_size: 1534020587
---
# Dataset Card for "full-dataset-latest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andyyang | null | null | null | false | null | false | andyyang/stable_diffusion_prompts_2m | 2022-11-10T06:38:10.000Z | null | false | 904ada614d1d3dd374dd4752730b0db9017334df | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/andyyang/stable_diffusion_prompts_2m/resolve/main/README.md | ---
license: cc0-1.0
---
# Stable Diffusion Prompts 2M
Because the DiffusionDB dataset is too big, I extracted the prompts for prompt study.
The files:
- sd_promts_2m.txt: the main dataset.
- sd_top5000.keywords.tsv: the top 5,000 most frequent keywords or phrases.
|
kakaobrain | null | null | null | false | 1 | false | kakaobrain/coyo-labeled-300m | 2022-11-11T01:11:22.000Z | null | false | 8d62a7d805261fc2ffd233a4f31e33049d87eec4 | [] | [
"arxiv:2010.11929",
"annotations_creators:no-annotation",
"language:en",
"language_creators:other",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"tags:image-labeled pairs",
"task_categories:image-classification",
"task_ids:multi-label-image-classification"
] | https://huggingface.co/datasets/kakaobrain/coyo-labeled-300m/resolve/main/README.md |
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: COYO-Labeled-300M
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- image-labeled pairs
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for COYO-Labeled-300M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email](coyo@kakaobrain.com)
### Dataset Summary
**COYO-Labeled-300M** is a dataset of 300M **machine-labeled** images paired with multi-label annotations. We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K. We followed the same evaluation pipeline as in EfficientNetV2. The labels are the top 50 most likely labels out of the 21,841 classes of ImageNet-21K. Label probabilities are provided rather than hard labels, so users can select a threshold of their choice for multi-label classification, or take the top-1 class for single-class classification.
In other words, **COYO-Labeled-300M** is an ImageNet-like dataset. Instead of 1.25 million human-labeled samples, it has 300 million machine-labeled samples. This dataset is similar to JFT-300M, which has not been released to the public.
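As a minimal sketch of both uses, operating on the `labels` and `label_probs` fields of a record (the threshold value is the user's choice):
```python
import numpy as np

def multi_labels(labels, label_probs, threshold=0.1):
    # Multi-label use: keep the classes whose probability clears the threshold
    return [l for l, p in zip(labels, label_probs) if p >= threshold]

def top1_label(labels, label_probs):
    # Single-class use: take the most likely class
    return labels[int(np.argmax(label_probs))]
```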
### Supported Tasks and Leaderboards
We empirically validated the quality of the COYO-Labeled-300M dataset by re-implementing a popular model, [ViT](https://arxiv.org/abs/2010.11929).
We found that our ViT implementation trained on COYO-Labeled-300M performs similarly to the numbers reported in the ViT paper for training on JFT-300M.
We also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.
### Languages
The labels in the COYO-Labeled-300M dataset are in English.
## Dataset Structure
### Data Instances
Each instance in COYO-Labeled-300M represents an image paired with multi-label information and meta-attributes.
We also provide the label metadata file, **imagenet21k_tree.pickle**.
```
{
'id': 315,
'url': 'https://a.1stdibscdn.com/pair-of-blue-and-white-table-lamps-for-sale/1121189/f_121556431538206028457/12155643_master.jpg?width=240',
'imagehash': 'daf5a50aae4aa54a',
'labels': [8087, 11054, 8086, 6614, 6966, 8193, 10576, 9710, 4334, 9909, 8090, 10104, 10105, 9602, 5278, 9547, 6978, 12011, 7272, 5273, 6279, 4279, 10903, 8656, 9601, 8795, 9326, 4606, 9907, 9106, 7574, 10006, 7257, 6959, 9758, 9039, 10682, 7164, 5888, 11654, 8201, 4546, 9238, 8197, 10882, 17380, 4470, 5275, 10537, 11548],
'label_probs': [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875, 0.03240966796875, 0.0157928466796875, 0.01406097412109375, 0.01129150390625, 0.00978851318359375, 0.00841522216796875, 0.007720947265625, 0.00634002685546875, 0.0041656494140625, 0.004070281982421875, 0.002910614013671875, 0.0028018951416015625, 0.002262115478515625, 0.0020503997802734375, 0.0017080307006835938, 0.0016880035400390625, 0.0016679763793945312, 0.0016613006591796875, 0.0014324188232421875, 0.0012445449829101562, 0.0011739730834960938, 0.0010318756103515625, 0.0008969306945800781, 0.0008792877197265625, 0.0008726119995117188, 0.0008263587951660156, 0.0007123947143554688, 0.0006799697875976562, 0.0006561279296875, 0.0006542205810546875, 0.0006093978881835938, 0.0006046295166015625, 0.0005769729614257812, 0.00057220458984375, 0.0005636215209960938, 0.00055694580078125, 0.0005092620849609375, 0.000507354736328125, 0.000507354736328125, 0.000499725341796875, 0.000484466552734375, 0.0004456043243408203, 0.0004439353942871094, 0.0004355907440185547, 0.00043392181396484375, 0.00041866302490234375],
'width': 240,
'height': 240
}
```
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html); the value maps to the same ID in the existing COYO-700M. |
| url | string | The image URL extracted from the `src` attribute of the `<img>` |
| imagehash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| labels | sequence[integer] | Inference results of the EfficientNetV2-XL model trained on the ImageNet-21K dataset (top 50 label indices among 21,841 classes) |
| label_probs | sequence[float] | Inference results of the EfficientNetV2-XL model trained on the ImageNet-21K dataset (probabilities corresponding to the top 50 labels) |
| width | integer | The width of the image |
| height | integer | The height of the image |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K. Data was sampled to a size similar to JFT-300M, filtered by a threshold on the top-1 label probability.
### Source Data
[COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m)
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
The basic instructions, license, and contributors are the same as for [coyo-700m](https://huggingface.co/datasets/kakaobrain/coyo-700m).
|
lcolok | null | null | null | false | null | false | lcolok/Asian_Regularization_images | 2022-11-10T07:07:10.000Z | null | false | 0d7f9fd522ab3d00f91cfff921cadfefdb25f0aa | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/lcolok/Asian_Regularization_images/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
pixta-ai | null | null | null | false | null | false | pixta-ai/e-commerce-apparel-dataset-for-ai-ml | 2022-11-10T08:08:07.000Z | null | false | f785ee4d5d396f2dc4d41a40115f20c26febc145 | [] | [
"license:other"
] | https://huggingface.co/datasets/pixta-ai/e-commerce-apparel-dataset-for-ai-ml/resolve/main/README.md | ---
license: other
---
# 1. Overview
This dataset is a collection of 5,000+ images of clothing & apparel that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region, offering fully-managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.
# 2. Use case
The e-commerce apparel dataset could be used for various AI & computer vision models: Product Visual Search, Similar Product Recommendation, Product Catalog, and more. Each dataset is supported by both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform, providing data, content, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology in managing, curating and processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email contact@pixta.ai. |
pixta-ai | null | null | null | false | null | false | pixta-ai/mixed-race-children-face-recognition | 2022-11-10T08:10:57.000Z | null | false | f040f7d4760d2ba326d0343355b54ad891b6b225 | [] | [
"license:other"
] | https://huggingface.co/datasets/pixta-ai/mixed-race-children-face-recognition/resolve/main/README.md | ---
license: other
---
# 1. Overview
This dataset is a collection of 5,000+ images of mixed-race children's faces that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region, offering fully-managed services, high-quality content and data, and powerful tools for businesses & organizations to enable their creative and machine learning projects.
# 2. Use case
The 5,000+ images of children's faces could be used for various AI & computer vision models: Face Recognition, Smart Homes, Security Solutions, Class Attendance Monitoring, and more. Each dataset is supported by both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform, providing data, content, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology in managing, curating and processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email contact@pixta.ai. |
KETI-AIR | null | There is no citation information | # Living and Residential Environment VQA
## Introduction
(Yuseong-gu, Daejeon) Living and residential environment VQA AI data for developing diverse VQA-based AI services suited to the Korean domestic environment
## Purpose
- A dataset for training AI that can answer, on its own, objective or inferable questions about the visual information in images of the daily lives of children, the elderly, and individuals
## Applications
- Development of Korean AI visual-intelligence models capable of free-form description of visual information, inferring situations from images, and more
## Description
- A dataset of diverse images captured from Koreans' everyday lives, paired with related question-answer data, for training AI to answer questions about objects, hazards, and other elements of the living environment. Images were de-identified and cleaned before annotation and validation, resolving privacy concerns in the captured photos.
## Contents and Data Volume
- 1,063,340 daily-life images (961,068 regular photographs / 102,272 images extracted from 3D space scans)
- 7,119,756 question-answer texts in total (an average of 7 per image)
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_living_env_vqa.py",
"default",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Contact
| Contact | Phone | Email |
| ------------- | ------------- | ------------- |
| 나현우 (Euclidsoft) | 042-488-6589 | hwna@euclidsoft.co.kr |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI Data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project — the data, AI application models, data-annotation tool sources, manuals, and so on (hereinafter "AI Data etc.") — belong to the organizations that built the AI Data etc. and participated in the project (hereinafter "performing organizations etc.") and to the National Information Society Agency.
The AI Data etc. were built to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a variety of fields such as intelligent products and services and chatbots.
### Data Use Policy
- To use the AI Data etc., you are notified that you must agree to and comply with the following:
1. Whenever you use the AI Data etc., you must state that they are an outcome of a National Information Society Agency project, and the same statement must appear in any derivative works that use the AI Data etc.
2. For a corporation, organization, or individual located outside Korea to use the AI Data etc., a separate agreement with the performing organizations etc. and the National Information Society Agency is required.
3. Exporting the AI Data etc. outside Korea requires a separate agreement with the performing organizations etc. and the National Information Society Agency.
4. The AI Data may be used only for training AI models. If the National Information Society Agency judges the purpose, method, or content of the use of the AI Data etc. to be illegal or inappropriate, it may refuse to provide them, and if they have already been provided, it may demand that use be stopped and that the AI Data etc. be returned or destroyed.
5. You may not allow access to, provide, transfer, lend, or sell the AI Data etc. to any other corporation, organization, or individual without the approval of the performing organizations etc. and the National Information Society Agency.
6. All civil and criminal liability arising from use beyond the purpose stated in Clause 4, or from unauthorized access, provision, transfer, lending, or sale under Clause 5, rests with the corporation, organization, or individual that used the AI Data etc.
7. If a user discovers that a dataset provided by AI Hub contains personal information or the like, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as AI service development, and no attempt may be made to re-identify individuals using it.
9. If the National Information Society Agency later conducts a survey on use cases, outcomes, and the like, users must participate in good faith.
### How to Request a Data Download
1. Downloading the AI Data etc. provided through AI Hub requires a separate procedure for verifying the applicant's identity, providing information, and stating the purpose of use.
2. Except for the AI Data itself, materials such as data descriptions and annotation tools can be used without a separate application procedure or login.
3. For AI Data etc. whose rights holder is not the National Information Society Agency, the relevant organization's use policy and download procedure apply; please note that these are unrelated to AI Hub.
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_living_env_vqa/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | There is no citation information | # Visual-Information-Based Question Answering
## Introduction
Provides image data usable as a training dataset for visual question answering (VQA) research, built as a VQA dataset consisting of images together with questions and answers about those images
## Purpose
Build a visual-information-based question-answering dataset and develop a pilot question-answering service model to assist visually impaired people
## Applications
- Fire safety (automatic detection of inadequate fire-code compliance), everyday safety (visual-information-based hazard recognition), childcare assistance (alerts from analyzing dangerous objects), walking-assistance applications for the visually impaired (TTS alert apps), virtual indoor furniture-placement services (3D spatial scanning), architectural design (creating floor plans from scan maps), smart public administration (emergency-report services for seniors living alone), and more
## Description
- A visual-information-based question answering (VQA) dataset consisting of images together with questions and answers about those images, built as a training dataset for VQA technology research
## Contents and Data Volume
- Uses 350,000 images of the living spaces of visually impaired people
- Builds a question-and-answer dataset of 7.5 million items over a total of 1.35 million images
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_visual_info_vqa.py",
"default",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Contact
| Contact | Phone | Email |
| ------------- | ------------- | ------------- |
| 안성빈 (Euclidsoft) | 042-488-6589 | sbahn@euclidsoft.co.kr |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI Data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project — the data, AI application models, data-annotation tool sources, manuals, and so on (hereinafter "AI Data etc.") — belong to the organizations that built the AI Data etc. and participated in the project (hereinafter "performing organizations etc.") and to the National Information Society Agency.
The AI Data etc. were built to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a variety of fields such as intelligent products and services and chatbots.
### Data Use Policy
- To use the AI Data etc., you are notified that you must agree to and comply with the following:
1. Whenever you use the AI Data etc., you must state that they are an outcome of a National Information Society Agency project, and the same statement must appear in any derivative works that use the AI Data etc.
2. For a corporation, organization, or individual located outside Korea to use the AI Data etc., a separate agreement with the performing organizations etc. and the National Information Society Agency is required.
3. Exporting the AI Data etc. outside Korea requires a separate agreement with the performing organizations etc. and the National Information Society Agency.
4. The AI Data may be used only for training AI models. If the National Information Society Agency judges the purpose, method, or content of the use of the AI Data etc. to be illegal or inappropriate, it may refuse to provide them, and if they have already been provided, it may demand that use be stopped and that the AI Data etc. be returned or destroyed.
5. You may not allow access to, provide, transfer, lend, or sell the AI Data etc. to any other corporation, organization, or individual without the approval of the performing organizations etc. and the National Information Society Agency.
6. All civil and criminal liability arising from use beyond the purpose stated in Clause 4, or from unauthorized access, provision, transfer, lending, or sale under Clause 5, rests with the corporation, organization, or individual that used the AI Data etc.
7. If a user discovers that a dataset provided by AI Hub contains personal information or the like, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as AI service development, and no attempt may be made to re-identify individuals using it.
9. If the National Information Society Agency later conducts a survey on use cases, outcomes, and the like, users must participate in good faith.
### How to Request a Data Download
1. Downloading the AI Data etc. provided through AI Hub requires a separate procedure for verifying the applicant's identity, providing information, and stating the purpose of use.
2. Except for the AI Data itself, materials such as data descriptions and annotation tools can be used without a separate application procedure or login.
3. For AI Data etc. whose rights holder is not the National Information Society Agency, the relevant organization's use policy and download procedure apply; please note that these are unrelated to AI Hub.
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_visual_info_vqa/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | @inproceedings{kvqa,
author = "Kim, Jin-Hwa and Lim, Soohyun and Park, Jaesun and Cho, Hansu",
title = "Korean Localization of Visual Question Answering for Blind People",
year = "2019",
maintitle = "NeurIPS",
booktitle = "AI for Social Good workshop",
} | # Visual question answering
VQA understands a provided image and, when a person asks a question about it, provides a natural-language answer after analyzing (or reasoning about) the image.
# KVQA dataset
As part of T-Brain’s projects on social value, the KVQA dataset, a Korean version of the VQA dataset, was created. The KVQA dataset consists of photos taken by visually impaired Koreans, questions about the photos, and 10 answers from 10 distinct annotators for each question.
Currently, it consists of 30,000 sets of images and questions, and 300,000 answers, but by the end of this year, we will increase the dataset size to 100,000 sets of images and questions, and 1 million answers.
This dataset can be used only for educational and research purposes. Please refer to the attached license for more details. We hope that the KVQA dataset can provide opportunities both for the development of Korean VQA technology and for the creation of meaningful social value in Korean society.
You can download KVQA dataset via [this link](https://drive.google.com/drive/folders/1IQazOJtNTBql51woveN4zb6NplxH7eVl?usp=sharing).
## Evaluation
We measure the model's accuracy using answers collected from 10 different people for each question. If the answer provided by a VQA model matches 3 or more of the 10 annotators' answers, it scores 100%; with fewer than 3 matches, it receives a proportional partial score. To be consistent with ‘human accuracies’, measured accuracies are averaged over all 10-choose-9 subsets of human annotators. Please refer to [VQA Evaluation](https://visualqa.org/evaluation.html), which we follow.
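To make the scoring rule concrete, here is a minimal sketch of the per-question accuracy described above (not the official evaluation script; answer normalization such as lowercasing and punctuation stripping is omitted):
```python
from itertools import combinations

def vqa_accuracy(model_answer, human_answers):
    # Average min(#matching answers / 3, 1) over all 9-annotator subsets
    # of the 10 collected answers, as described above.
    scores = []
    for subset in combinations(human_answers, len(human_answers) - 1):
        matches = sum(answer == model_answer for answer in subset)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# 4 of 10 annotators agree -> every 9-annotator subset has >= 3 matches -> 1.0
print(vqa_accuracy("피아노", ["피아노"] * 4 + ["unanswerable"] * 6))
```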
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"kvqa.py",
"default",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Data statistics
### v1.0 (Jan. 2020)
| | Overall (%) | Yes/no (%) | Number (%) | Etc (%) | Unanswerable (%) |
|:------------|:---------------|:-------------|:-------------|:---------------|:-----------------|
| # images | 100,445 (100) | 6,124 (6.10) | 9,332 (9.29) | 69,069 (68.76) | 15,920 (15.85) |
| # questions | 100,445 (100) | 6,124 (6.10) | 9,332 (9.29) | 69,069 (68.76) | 15,920 (15.85) |
| # answers | 1,004,450 (100)| 61,240 (6.10)| 93,320 (9.29)| 690,690 (68.76)| 159,200 (15.85) |
## Data
### Data field description
| Name | Type | Description |
|:---------------------------------|:---------|:---------------------------------------------------------------|
| VQA | `[dict]` | `list` of `dict` holding VQA data |
| +- image | `str` | filename of image |
| +- source | `str` | data source `["kvqa" \| "vizwiz"]` |
| +- answers | `[dict]` | `list` of `dict` holding 10 answers |
| +--- answer | `str` | answer in `string` |
| +--- answer_confidence | `str` | `["yes" \| "maybe" \| "no"]` |
| +- question | `str` | question about the image |
| +- answerable | `int` | answerable? `[0 \| 1]` |
| +- answer_type | `str` | answer type `["number" \| "yes/no" \| "unanswerable" \| "other"]` |
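Given this structure, reading and summarizing an annotation file might look as follows (a minimal sketch; the filename is a placeholder for the JSON file from the download link above):
```python
import json
from collections import Counter

# "kvqa_annotations.json" is a placeholder filename; use the JSON file
# obtained from the download link above.
with open("kvqa_annotations.json", encoding="utf-8") as f:
    records = json.load(f)

# Distribution of answer types, and the majority answer for a few questions.
print(Counter(record["answer_type"] for record in records))
for record in records[:3]:
    top_answer = Counter(a["answer"] for a in record["answers"]).most_common(1)[0][0]
    print(record["question"], "->", top_answer)
```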
### Data example
```json
[{
"image": "KVQA_190712_00143.jpg",
"source": "kvqa",
"answers": [{
"answer": "피아노",
"answer_confidence": "yes"
}, {
"answer": "피아노",
"answer_confidence": "yes"
}, {
"answer": "피아노 치고있다",
"answer_confidence": "maybe"
}, {
"answer": "unanswerable",
"answer_confidence": "maybe"
}, {
"answer": "게임",
"answer_confidence": "maybe"
}, {
"answer": "피아노 앞에서 무언가를 보고 있음",
"answer_confidence": "maybe"
}, {
"answer": "피아노치고있어",
"answer_confidence": "maybe"
}, {
"answer": "피아노치고있어요",
"answer_confidence": "maybe"
}, {
"answer": "피아노 연주",
"answer_confidence": "maybe"
}, {
"answer": "피아노 치기",
"answer_confidence": "yes"
}],
"question": "방에 있는 사람은 지금 뭘하고 있지?",
"answerable": 1,
"answer_type": "other"
},
{
"image": "VizWiz_train_000000008148.jpg",
"source": "vizwiz",
"answers": [{
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "티비 리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "maybe"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}],
"question": "이것은 무엇인가요?",
"answerable": 1,
"answer_type": "other"
}
]
``` | false | 322 | false | KETI-AIR/kvqa | 2022-11-10T09:58:40.000Z | null | false | 853470d118146bd1efd05a12e41e09838c74c7b7 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/kvqa/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | ```
@InProceedings{balanced_vqa_v2,
author = {Yash Goyal and Tejas Khot and Douglas Summers{-}Stay and Dhruv Batra and Devi Parikh},
title = {Making the {V} in {VQA} Matter: Elevating the Role of Image Understanding in {V}isual {Q}uestion {A}nswering},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2017},
}
```
```
@InProceedings{balanced_binary_vqa,
author = {Peng Zhang and Yash Goyal and Douglas Summers{-}Stay and Dhruv Batra and Devi Parikh},
title = {{Y}in and {Y}ang: Balancing and Answering Binary Visual Questions},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2016},
}
```
```
@InProceedings{{VQA},
author = {Stanislaw Antol and Aishwarya Agrawal and Jiasen Lu and Margaret Mitchell and Dhruv Batra and C. Lawrence Zitnick and Devi Parikh},
title = {{VQA}: {V}isual {Q}uestion {A}nswering},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2015},
}
``` | # VQA
## What is VQA?
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
- 265,016 images (COCO and abstract scenes)
- At least 3 questions (5.4 questions on average) per image
- 10 ground truth answers per question
- 3 plausible (but likely incorrect) answers per question
- Automatic evaluation metric
## Dataset
Details on downloading the latest dataset may be found on the [download webpage](https://visualqa.org/download.html).
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"vqa.py",
"base",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
Dataset variant notes:
- v2 = v2.real + v2.abstract (v2.abstract == v1.abstract)
- v1 = v1.real + v1.abstract
- v2.abstract.balanced.bin
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/vqa/resolve/main/README.md | ---
license: apache-2.0
---
|
lucadiliello | null | null | null | false | 2 | false | lucadiliello/mnli | 2022-11-10T10:08:49.000Z | null | false | 52c2eb978a809403513e188df36f895cc9067eaf | [] | [] | https://huggingface.co/datasets/lucadiliello/mnli/resolve/main/README.md | ---
dataset_info:
features:
- name: key
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: dev_matched
num_bytes: 1869989
num_examples: 9815
- name: dev_mismatched
num_bytes: 1985345
num_examples: 9832
- name: test_matched
num_bytes: 1884664
num_examples: 9796
- name: test_mismatched
num_bytes: 1986695
num_examples: 9847
- name: train
num_bytes: 76786075
num_examples: 392702
download_size: 54416761
dataset_size: 84512768
---
# Dataset Card for "mnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
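A minimal loading sketch, assuming the repository id and the split names listed in the metadata above:
```python
from datasets import load_dataset

# Split names ("train", "dev_matched", "dev_mismatched", "test_matched",
# "test_mismatched") are taken from the metadata above.
mnli = load_dataset("lucadiliello/mnli", split="dev_matched")
print(mnli[0]["premise"], "->", mnli[0]["hypothesis"], "| label:", mnli[0]["label"])
``` |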
aarimond | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This dataset contains ELF Codes | false | 8 | false | aarimond/test_elf_data | 2022-11-10T20:04:06.000Z | null | false | efd966cf0b4ac4d9922a01448336df40b183cd40 | [] | [] | https://huggingface.co/datasets/aarimond/test_elf_data/resolve/main/README.md | ---
dataset_info:
- config_name: US-DE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: HZEH
1: 4FSX
2: '8888'
3: T91T
4: 9ASJ
5: XTIQ
6: 1HXP
7: QF4W
8: TGMR
9: 12N6
10: MIPY
11: '9999'
splits:
- name: test
num_bytes: 724740
num_examples: 10922
- name: train
num_bytes: 2538400
num_examples: 38226
- name: validation
num_bytes: 362293
num_examples: 5462
download_size: 300333753
dataset_size: 3625433
- config_name: DE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 2HBR
1: 6QQB
2: XLWA
3: '8888'
4: 8Z6G
5: FR3V
6: SGST
7: QZ3L
8: 40DB
9: V2YH
10: 63KS
11: US8E
12: SCE1
13: SQKS
14: 13AV
15: AZFE
16: T0YJ
17: OL20
18: 9JGX
19: 79H0
20: 2YZO
21: YJ4C
22: D40E
23: 8CM0
24: JNDX
25: 7J3S
26: SUA1
27: JMVF
28: YA01
29: AMKW
30: '9999'
splits:
- name: test
num_bytes: 1823136
num_examples: 27242
- name: train
num_bytes: 6363812
num_examples: 95344
- name: validation
num_bytes: 909977
num_examples: 13621
download_size: 300333753
dataset_size: 9096925
- config_name: AT
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: AXSB
1: EQOV
2: '8888'
3: ONF1
4: JTAV
5: DX6Z
6: ECWU
7: 5WWO
8: 1NOX
9: E9OX
10: AAL7
11: JJYT
12: UI81
13: GVPD
14: NIJH
15: 8XDW
16: CAQ1
17: JQOI
18: O65B
19: G3R6
20: 69H1
splits:
- name: test
num_bytes: 322485
num_examples: 4905
- name: train
num_bytes: 1130264
num_examples: 17167
- name: validation
num_bytes: 161898
num_examples: 2453
download_size: 300333753
dataset_size: 1614647
- config_name: AU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: TXVC
1: ADXG
2: R4KK
3: '8888'
4: 7TPC
5: LZFR
6: Q82Q
7: BC38
8: XHCV
9: PQHL
10: J4JC
11: 6W6X
12: '9999'
splits:
- name: test
num_bytes: 203149
num_examples: 3069
- name: train
num_bytes: 711458
num_examples: 10737
- name: validation
num_bytes: 102011
num_examples: 1535
download_size: 300333753
dataset_size: 1016618
- config_name: CH
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: MVII
1: '8888'
2: 7MNN
3: FJG4
4: 2JZ4
5: 54WI
6: 3EKS
7: FLNB
8: XJOT
9: H781
10: QSI2
11: DP2E
12: E0NE
13: 5BEZ
14: AZA0
15: 2B81
16: M848
17: 1BL5
18: HX77
19: CQMY
20: '9999'
21: MRSY
22: GP8M
23: FFTN
24: L5DU
25: TL87
26: 2XJA
27: W6A7
28: BF9N
splits:
- name: test
num_bytes: 173897
num_examples: 2770
- name: train
num_bytes: 607674
num_examples: 9691
- name: validation
num_bytes: 87465
num_examples: 1385
download_size: 300333753
dataset_size: 869036
- config_name: CZ
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 9HLU
1: 6CQN
2: TNBA
3: 9RVC
4: ZQO8
5: RHFQ
6: 747U
7: 6D9L
8: 3G3D
9: 95G8
10: SNWJ
11: J8PB
12: JCAD
13: CATU
14: CD28
15: IQ9O
16: HY6K
17: UFDA
18: QIEL
19: 7OZQ
20: 6FAI
21: NI3I
22: FY1B
23: QQ49
24: Q25I
25: G2I3
26: BL4B
27: '9999'
28: QJ0F
29: 5KU5
30: O9PW
31: 4UB2
32: QS6A
33: 917C
34: VIE3
35: ET6Z
36: LJL0
37: CIO8
38: T3Q1
39: OVKW
40: MAVU
41: PFE5
42: MBUU
43: HQPK
44: NQHQ
45: XG70
46: C4Q2
47: NPH3
48: '8888'
49: D1VK
50: VQU7
splits:
- name: test
num_bytes: 171802
num_examples: 2918
- name: train
num_bytes: 601943
num_examples: 10211
- name: validation
num_bytes: 85981
num_examples: 1459
download_size: 300333753
dataset_size: 859726
- config_name: DK
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: H8VP
1: ZRPO
2: 9KSX
3: D4PU
4: 40R4
5: FUKI
6: 7WRN
7: 599X
8: '8888'
9: GFXN
10: NUL8
11: PIOI
12: PZ6Y
13: F7JY
14: PMJW
15: WU7R
16: 1MWR
17: 37UT
18: GULL
19: FW7S
20: 5QS7
21: '9999'
splits:
- name: test
num_bytes: 663327
num_examples: 11356
- name: train
num_bytes: 2316008
num_examples: 39743
- name: validation
num_bytes: 330461
num_examples: 5678
download_size: 300333753
dataset_size: 3309796
- config_name: EE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 9LJA
1: JC0Y
2: PRTB
3: '8888'
4: LVEQ
5: 1NKP
6: VSEV
7: I1UP
8: 752Q
9: J34T
10: LA47
11: 3UPJ
12: 8ZQE
splits:
- name: test
num_bytes: 140406
num_examples: 2707
- name: train
num_bytes: 490420
num_examples: 9470
- name: validation
num_bytes: 70031
num_examples: 1354
download_size: 300333753
dataset_size: 700857
- config_name: ES
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 5RDO
1: S0Z5
2: DP3Q
3: FH4R
4: R6UT
5: UJ35
6: MDOL
7: '8888'
8: 8EHB
9: K0RI
10: S6MS
11: JB2M
12: 1G29
13: A97B
14: GJL1
15: QMUM
16: AXS5
17: JTV5
18: IT6N
19: 956I
20: 7U8O
21: 9FPZ
22: 1QU8
23: TUHS
24: I2WU
25: A0J6
26: S6X7
27: 4SJR
28: CUIH
29: SS0L
30: IAS6
31: ARDP
32: B0V5
33: 1SL4
34: '9999'
35: 1ZHJ
36: TDD5
37: R2L8
38: 4S57
39: AJ9U
40: DDES
41: XYGP
splits:
- name: test
num_bytes: 1051077
num_examples: 16932
- name: train
num_bytes: 3666811
num_examples: 59258
- name: validation
num_bytes: 522441
num_examples: 8466
download_size: 300333753
dataset_size: 5240329
- config_name: FI
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 5WI2
1: K6VE
2: DKUW
3: UXEW
4: '8888'
5: NV7C
6: K2G8
7: 1AFG
8: HEOB
9: YK5G
10: 8WJ7
11: XJH3
12: VOTI
13: V0TJ
14: 2RK5
15: PPMX
16: BKVI
17: 760X
18: 883O
19: BKQO
20: EE90
21: 4H61
22: DAFV
23: ZMTL
24: SJL9
25: K09E
26: R39F
27: 8HGS
28: IYF9
29: SDPE
30: 97PB
31: N3LC
32: EDZP
33: 6PEQ
34: DMT8
35: SKGX
36: Z38E
37: KHI5
38: MRW9
39: T3K4
40: HTT9
41: SQS1
42: 37GR
43: OXLO
44: R6UB
45: 9AUC
46: DL9Z
47: V42B
48: UMF0
49: '9999'
50: 1YIR
51: EMC8
splits:
- name: test
num_bytes: 400211
num_examples: 7165
- name: train
num_bytes: 1397786
num_examples: 25074
- name: validation
num_bytes: 200105
num_examples: 3583
download_size: 300333753
dataset_size: 1998102
- config_name: GB
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: '8888'
1: H0PO
2: B6ES
3: G12F
4: Z0EY
5: VV0W
6: 57V7
7: AVYY
8: JTCO
9: ID30
10: XLZV
11: 7T8N
12: STX7
13: 4GJI
14: Q0M5
15: 9B78
16: 17R0
17: E12O
18: BX6Y
19: IYXU
20: WBQU
21: NBTW
22: 468Q
23: 60IF
24: 5FRT
25: 8CF0
26: ZZGG
27: 4A3J
28: '9999'
splits:
- name: test
num_bytes: 986783
num_examples: 15037
- name: train
num_bytes: 3445609
num_examples: 52626
- name: validation
num_bytes: 489952
num_examples: 7519
download_size: 300333753
dataset_size: 4922344
- config_name: HU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: P9F2
1: BKUX
2: 8VH3
3: S3DA
4: EO9F
5: M1DW
6: 8UEG
7: BJ8Q
8: BMYJ
9: TSVO
10: 2A44
11: XW5U
12: '8888'
13: DPY1
14: DN6F
15: QYV5
16: 876R
17: 4QRE
18: 4WV7
19: '9999'
20: 4C5L
21: ZQAQ
22: 2LB5
23: LNY0
24: BSK1
25: ESTU
26: V3LT
27: J6MO
28: TQ3O
29: X0SX
30: UD8K
31: Y64R
32: 995K
33: OII5
splits:
- name: test
num_bytes: 206947
num_examples: 2084
- name: train
num_bytes: 721420
num_examples: 7291
- name: validation
num_bytes: 102939
num_examples: 1042
download_size: 300333753
dataset_size: 1031306
- config_name: IE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: LGWG
1: '8888'
2: MNQ7
3: VYAX
4: JXDX
5: KMFX
6: 2GV9
7: C58S
8: DWS3
9: HNJK
10: 5AX8
11: 54SK
12: LZIC
13: URQH
14: '9999'
15: 9BPE
16: FF1D
17: ZJS8
18: 363J
splits:
- name: test
num_bytes: 248299
num_examples: 3070
- name: train
num_bytes: 865679
num_examples: 10744
- name: validation
num_bytes: 123691
num_examples: 1535
download_size: 300333753
dataset_size: 1237669
- config_name: JP
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: T417
1: '8888'
2: DYQK
3: 7QQ0
4: N3JU
5: R4LR
6: '9999'
7: IUVI
8: MXMH
9: 2NRQ
10: VQLD
11: 5MVV
splits:
- name: test
num_bytes: 172342
num_examples: 1952
- name: train
num_bytes: 603558
num_examples: 6828
- name: validation
num_bytes: 86887
num_examples: 976
download_size: 300333753
dataset_size: 862787
- config_name: KY
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: OSBR
1: '8888'
2: XAQA
3: 6XB7
4: MP7S
5: MPUG
6: 4XP8
7: K575
8: T5UM
9: JDX6
10: '9999'
11: SNUK
12: 8HR7
splits:
- name: test
num_bytes: 293193
num_examples: 4142
- name: train
num_bytes: 1026219
num_examples: 14495
- name: validation
num_bytes: 148206
num_examples: 2071
download_size: 300333753
dataset_size: 1467618
- config_name: LI
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: TV8Y
1: TMU1
2: BSZ8
3: 7RRP
4: 1DGT
5: '8888'
6: 53QF
7: Y8LH
8: IF49
9: WAK8
10: 32HC
11: ANSR
12: 1SOY
splits:
- name: test
num_bytes: 108787
num_examples: 1880
- name: train
num_bytes: 379787
num_examples: 6578
- name: validation
num_bytes: 54055
num_examples: 940
download_size: 300333753
dataset_size: 542629
- config_name: LU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: '8888'
1: DVXS
2: 5GGB
3: U8KA
4: UDY2
5: 81G5
6: 63P9
7: AIR5
8: 2JEI
9: SQ1A
10: WCEP
11: HHR4
12: STBC
13: V19Y
14: '9999'
15: V5OS
16: 2S2U
17: ZFFA
18: ATQY
19: LCR0
20: EUT4
21: 7SIZ
22: BKAB
23: 2IGL
24: BEAN
25: 68J6
26: 9C91
27: JIWD
splits:
- name: test
num_bytes: 469705
num_examples: 6792
- name: train
num_bytes: 1643123
num_examples: 23768
- name: validation
num_bytes: 235172
num_examples: 3396
download_size: 300333753
dataset_size: 2348000
- config_name: NL
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 54M6
1: V44D
2: B5PM
3: '8888'
4: EZQW
5: JHK5
6: NFFH
7: CODH
8: 62Y3
9: L7HX
10: A0W7
11: 33MN
12: BBEB
13: 4QXM
14: '9999'
15: M1IZ
16: 9AAK
17: DEO1
18: GNXT
19: UNJ2
splits:
- name: test
num_bytes: 1060390
num_examples: 17957
- name: train
num_bytes: 3706306
num_examples: 62848
- name: validation
num_bytes: 530621
num_examples: 8979
download_size: 300333753
dataset_size: 5297317
- config_name: 'NO'
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: YI42
1: LJJW
2: V06W
3: '8888'
4: IQGE
5: 3C7U
6: FSBD
7: EXD7
8: K5P8
9: 8S9H
10: GYY6
11: 4ZRR
12: 3L58
13: R71C
14: BJ65
15: M9IQ
16: O0EU
17: CF5L
18: 326Y
19: ZQ0Q
20: Q0Q1
21: PB3V
22: 9DI1
23: AEV1
24: YTMC
25: 5ZTZ
26: 50TD
splits:
- name: test
num_bytes: 349905
num_examples: 6651
- name: train
num_bytes: 1223064
num_examples: 23277
- name: validation
num_bytes: 174418
num_examples: 3326
download_size: 300333753
dataset_size: 1747387
- config_name: PL
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: FJ0E
1: O7XB
2: RBHP
3: BSJT
4: ZVVM
5: '8888'
6: OMX0
7: 629I
8: KM66
9: H7OD
10: 8TOF
11: WUJ2
12: T7PB
13: 96XK
14: ZZKE
15: 13ZV
16: LT9U
17: 3BJG
18: SVA3
19: SP4S
20: AL9T
21: B21W
22: 60BG
23: RUCO
24: JCKO
25: J3A3
26: WNX1
27: QUX1
28: FQ5Y
29: 5F76
30: WOK7
31: QYL4
32: GZE5
33: SMIS
34: CY1M
35: YLZL
splits:
- name: test
num_bytes: 331549
num_examples: 4048
- name: train
num_bytes: 1164275
num_examples: 14167
- name: validation
num_bytes: 168331
num_examples: 2024
download_size: 300333753
dataset_size: 1664155
- config_name: SE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: XJHM
1: CX05
2: '8888'
3: BEAY
4: BYQJ
5: 1TN0
6: OJ9I
7: C61P
8: 2UAX
9: AZTO
10: O1QI
11: SSOM
12: G04R
13: M0Y0
14: '9999'
15: WZDB
16: PDQ0
splits:
- name: test
num_bytes: 566233
num_examples: 9625
- name: train
num_bytes: 1978495
num_examples: 33687
- name: validation
num_bytes: 282253
num_examples: 4813
download_size: 300333753
dataset_size: 2826981
- config_name: US-CA
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: '8888'
1: 5HQ4
2: H1UM
3: EI4J
4: K7YU
5: SQ7B
6: PZR6
7: 7CDL
8: G1P6
9: CVXK
10: KQXA
11: 4JCS
12: BADE
13: '9999'
splits:
- name: test
num_bytes: 79126
num_examples: 1233
- name: train
num_bytes: 275962
num_examples: 4315
- name: validation
num_bytes: 39591
num_examples: 617
download_size: 300333753
dataset_size: 394679
- config_name: US-NY
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: '8888'
1: 51RC
2: PJ10
3: SDX0
4: XIZI
5: BO6L
6: 4VH5
7: '9999'
8: M0ER
9: EPCY
splits:
- name: test
num_bytes: 60357
num_examples: 952
- name: train
num_bytes: 211229
num_examples: 3331
- name: validation
num_bytes: 30484
num_examples: 476
download_size: 300333753
dataset_size: 302070
- config_name: VG
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
0: 6EH6
1: '8888'
2: YOP9
3: '9999'
4: Q62B
5: ZHED
6: GLCI
7: N28C
8: BST2
9: JS65
splits:
- name: test
num_bytes: 185500
num_examples: 3048
- name: train
num_bytes: 649068
num_examples: 10666
- name: validation
num_bytes: 92764
num_examples: 1524
download_size: 300333753
dataset_size: 927332
---
# Dataset Card for "ELF Codes"
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Table of Contents
- [Dataset Card for "ELF Codes"](#dataset-card-for-elf-codes)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [gleif.org](https://gleif.org)
- **Repository:** [The LENU project](https://github.com/Sociovestix/lenu)
- **Point of Contact:** [aarimond](https://huggingface.co/aarimond)
### Dataset Summary
This dataset contains Legal Entity names from the
[Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) Standard (ISO 17441)
along with their corresponding [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list) (Standard ISO 20275).
The dataset has been created as part of a collaboration between the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com), with the goal of exploring how Machine Learning can support the detection of the legal form (ELF Code) from a legal name.
See also the open source python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The data is created from LEI data downloaded from [GLEIF's public website](https://www.gleif.org/en/lei-data/gleif-golden-copy/download-the-golden-copy/) (Date: 2022-11-01 00:00).
It is divided into subsets for different (major) Legal Jurisdictions,
each Jurisdiction having their own set of ELF Codes. The ELF Code reference list can be downloaded [here](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
### Languages
The data covers several major Jurisdictions (e.g. US-DE (US Delaware), GB (Great Britain), DE (Germany), and others).
Legal Entity names usually follow certain language patterns, depending on the Jurisdiction in which the entity is located.
Thus, it makes sense to use models that are pre-trained on the corresponding language.
## Dataset Structure
### Data Instances
The data consists of the LEI, the corresponding Legal Name, and the Entity Legal Form (ELF) Code.
```
{
'LEI': '254900OMZ079O2SDWA75',
'Entity.LegalName': 'Park Reseda Mortgage LLC',
'Entity.LegalForm.EntityLegalFormCode': 0
}
```
### Data Fields
This is just a subset of the available fields in the LEI system. A detailed description of all fields can be found in GLEIF's
[LEI Common Data Format (CDF)](https://www.gleif.org/en/about-lei/common-data-file-format/current-versions/level-1-data-lei-cdf-3-1-format).
- `LEI`: The [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei) Code. Uniquely identifies a Legal Entity.
- `Entity.LegalName`: The official name of the legal entity as registered in the LEI system.
- `Entity.LegalForm.EntityLegalFormCode`: a class-encoded column that contains the [Entity Legal Form Code](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list)
### Data Splits
We have divided each Jurisdiction's subset into stratified train (70%), validation (10%) and test (20%) splits.
ELF Codes that appear less than three times in a Jurisdiction have been removed.
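As a usage illustration, the following is a minimal sketch (the repository id and config name are taken from the metadata above) that loads one Jurisdiction's subset and decodes the class-encoded label back into its ELF code:
```python
from datasets import load_dataset

# Load the German ("DE") subset; the config names follow the Jurisdiction codes above.
ds = load_dataset("aarimond/test_elf_data", "DE", split="train")

# The ELF label is class-encoded; ClassLabel.int2str recovers the 4-character code.
elf_label = ds.features["Entity.LegalForm.EntityLegalFormCode"]
example = ds[0]
print(example["Entity.LegalName"], "->",
      elf_label.int2str(example["Entity.LegalForm.EntityLegalFormCode"]))
```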
## Licensing Information
LEI data is available under Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
|
AlekseyKorshuk | null | null | null | false | 439 | false | AlekseyKorshuk/dalio-handwritten-io | 2022-11-10T11:41:00.000Z | null | false | 57f637d30f7a4c5ff44ecd64a63763179bd824e5 | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-handwritten-io/resolve/main/README.md | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: test
num_bytes: 14786
num_examples: 10
- name: train
num_bytes: 186546
num_examples: 156
- name: validation
num_bytes: 31729
num_examples: 29
download_size: 114870
dataset_size: 233061
---
# Dataset Card for "dalio-handwritten-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk | null | null | null | false | 16 | false | AlekseyKorshuk/dalio-handwritten-complete | 2022-11-10T11:41:36.000Z | null | false | b407d59e558e452bf6bc72f3365d4a622c7fe4f7 | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-handwritten-complete/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 11957
num_examples: 10
- name: train
num_bytes: 80837
num_examples: 55
- name: validation
num_bytes: 13340
num_examples: 10
download_size: 79024
dataset_size: 106134
---
# Dataset Card for "dalio-handwritten-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk | null | null | null | false | 89 | false | AlekseyKorshuk/dalio-synthetic-io | 2022-11-10T11:44:04.000Z | null | false | 248a2ed0252e2ff647f27fe49276a697a9c583ab | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-synthetic-io/resolve/main/README.md | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: test
num_bytes: 34283
num_examples: 19
- name: train
num_bytes: 483245
num_examples: 303
- name: validation
num_bytes: 84125
num_examples: 57
download_size: 299043
dataset_size: 601653
---
# Dataset Card for "dalio-synthetic-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk | null | null | null | false | null | false | AlekseyKorshuk/dalio-synthetic-complete | 2022-11-10T11:44:30.000Z | null | false | 0ee966aee92c0ceb06da61cb67cb0b8a5261785d | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-synthetic-complete/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 24972
num_examples: 19
- name: train
num_bytes: 209033
num_examples: 118
- name: validation
num_bytes: 48527
num_examples: 22
download_size: 165396
dataset_size: 282532
---
# Dataset Card for "dalio-synthetic-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk | null | null | null | false | 173 | false | AlekseyKorshuk/dalio-all-io | 2022-11-10T11:45:09.000Z | null | false | a6415c44a59cc8dcfbf1aa722cc45c8a87e2819c | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-all-io/resolve/main/README.md | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: test
num_bytes: 40070
num_examples: 29
- name: train
num_bytes: 676060
num_examples: 459
- name: validation
num_bytes: 118584
num_examples: 86
download_size: 399681
dataset_size: 834714
---
# Dataset Card for "dalio-all-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk | null | null | null | false | 2 | false | AlekseyKorshuk/dalio-all-complete | 2022-11-10T11:45:33.000Z | null | false | b6c482ef27596ffcd34956b45eedf37b1ccfc5cb | [] | [] | https://huggingface.co/datasets/AlekseyKorshuk/dalio-all-complete/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 28784
num_examples: 29
- name: train
num_bytes: 302691
num_examples: 173
- name: validation
num_bytes: 54939
num_examples: 33
download_size: 210354
dataset_size: 386414
---
# Dataset Card for "dalio-all-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 2 | false | cakiki/shell_paths | 2022-11-10T12:04:28.000Z | null | false | 5f22c8d924620cb0aed0dbb6fcd488b98c1b79e6 | [] | [] | https://huggingface.co/datasets/cakiki/shell_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 99354502
num_examples: 3657232
download_size: 82635721
dataset_size: 99354502
---
# Dataset Card for "shell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/cmake_paths | 2022-11-10T12:05:55.000Z | null | false | 9fd38e27d47abd2e31ea9449d0a3244ef9cdb9e5 | [] | [] | https://huggingface.co/datasets/cakiki/cmake_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 14898478
num_examples: 559316
download_size: 7920865
dataset_size: 14898478
---
# Dataset Card for "cmake_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/cpp_paths | 2022-11-10T12:11:49.000Z | null | false | 8d7956373a46b61d5dbbc93eaafac34dbec7f442 | [] | [] | https://huggingface.co/datasets/cakiki/cpp_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 339979633
num_examples: 13541537
download_size: 250743754
dataset_size: 339979633
---
# Dataset Card for "cpp_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki | null | null | null | false | 1 | false | cakiki/dockerfile_paths | 2022-11-10T12:12:39.000Z | null | false | d9fabc34754e7840bbeaae7c93e51ebee7163cf5 | [] | [] | https://huggingface.co/datasets/cakiki/dockerfile_paths/resolve/main/README.md | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 36265516
num_examples: 1274173
download_size: 23300431
dataset_size: 36265516
---
# Dataset Card for "dockerfile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
openclimatefix | null | null | null | false | null | false | openclimatefix/arco-era5 | 2022-11-10T12:15:34.000Z | null | false | 92274710c3b10948f908f2bcc6ad18d4ae46fcbe | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/openclimatefix/arco-era5/resolve/main/README.md | ---
license: apache-2.0
---
This dataset simply loads Google's Analysis-Ready, Cloud Optimized (ARCO) ERA5 reanalysis dataset from Google Public Datasets.
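A minimal usage sketch (the `streaming=True` flag and the `train` split are assumptions, not confirmed by this card):
```python
from datasets import load_dataset

# Assumed usage: stream records rather than downloading the full ERA5 archive
# locally; verify the available splits against the loading script.
ds = load_dataset("openclimatefix/arco-era5", streaming=True)
print(next(iter(ds["train"])))
``` |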